From patchwork Thu Oct 31 16:46:21 2019
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11221631
Date: Thu, 31 Oct 2019 09:46:21 -0700
Message-Id: <20191031164637.48901-2-samitolvanen@google.com>
In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com>
Subject: [PATCH v3 01/17] arm64: mm: avoid x18 in idmap_kpti_install_ng_mappings
From: samitolvanen@google.com
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland,
Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen idmap_kpti_install_ng_mappings uses x18 as a temporary register, which will result in a conflict when x18 is reserved. Use x16 and x17 instead where needed. Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Mark Rutland --- arch/arm64/mm/proc.S | 63 ++++++++++++++++++++++---------------------- 1 file changed, 32 insertions(+), 31 deletions(-) diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index a1e0592d1fbc..fdabf40a83c8 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -250,15 +250,15 @@ ENTRY(idmap_kpti_install_ng_mappings) /* We're the boot CPU. Wait for the others to catch up */ sevl 1: wfe - ldaxr w18, [flag_ptr] - eor w18, w18, num_cpus - cbnz w18, 1b + ldaxr w17, [flag_ptr] + eor w17, w17, num_cpus + cbnz w17, 1b /* We need to walk swapper, so turn off the MMU. */ pre_disable_mmu_workaround - mrs x18, sctlr_el1 - bic x18, x18, #SCTLR_ELx_M - msr sctlr_el1, x18 + mrs x17, sctlr_el1 + bic x17, x17, #SCTLR_ELx_M + msr sctlr_el1, x17 isb /* Everybody is enjoying the idmap, so we can rewrite swapper. */ @@ -281,9 +281,9 @@ skip_pgd: isb /* We're done: fire up the MMU again */ - mrs x18, sctlr_el1 - orr x18, x18, #SCTLR_ELx_M - msr sctlr_el1, x18 + mrs x17, sctlr_el1 + orr x17, x17, #SCTLR_ELx_M + msr sctlr_el1, x17 isb /* @@ -353,46 +353,47 @@ skip_pte: b.ne do_pte b next_pmd + .unreq cpu + .unreq num_cpus + .unreq swapper_pa + .unreq cur_pgdp + .unreq end_pgdp + .unreq pgd + .unreq cur_pudp + .unreq end_pudp + .unreq pud + .unreq cur_pmdp + .unreq end_pmdp + .unreq pmd + .unreq cur_ptep + .unreq end_ptep + .unreq pte + /* Secondary CPUs end up here */ __idmap_kpti_secondary: /* Uninstall swapper before surgery begins */ - __idmap_cpu_set_reserved_ttbr1 x18, x17 + __idmap_cpu_set_reserved_ttbr1 x16, x17 /* Increment the flag to let the boot CPU we're ready */ -1: ldxr w18, [flag_ptr] - add w18, w18, #1 - stxr w17, w18, [flag_ptr] +1: ldxr w16, [flag_ptr] + add w16, w16, #1 + stxr w17, w16, [flag_ptr] cbnz w17, 1b /* Wait for the boot CPU to finish messing around with swapper */ sevl 1: wfe - ldxr w18, [flag_ptr] - cbnz w18, 1b + ldxr w16, [flag_ptr] + cbnz w16, 1b /* All done, act like nothing happened */ - offset_ttbr1 swapper_ttb, x18 + offset_ttbr1 swapper_ttb, x16 msr ttbr1_el1, swapper_ttb isb ret - .unreq cpu - .unreq num_cpus - .unreq swapper_pa .unreq swapper_ttb .unreq flag_ptr - .unreq cur_pgdp - .unreq end_pgdp - .unreq pgd - .unreq cur_pudp - .unreq end_pudp - .unreq pud - .unreq cur_pmdp - .unreq end_pmdp - .unreq pmd - .unreq cur_ptep - .unreq end_ptep - .unreq pte ENDPROC(idmap_kpti_install_ng_mappings) .popsection #endif From patchwork Thu Oct 31 16:46:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221633 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9D4771390 for ; Thu, 31 Oct 2019 16:59:07 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id EF1D02087F for ; Thu, 31 Oct 2019 16:59:06 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com 
Date: Thu, 31 Oct 2019 09:46:22 -0700
Message-Id: <20191031164637.48901-3-samitolvanen@google.com>
In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com>
Subject: [PATCH v3 02/17] arm64/lib: copy_page: avoid x18 register in assembler code
From: samitolvanen@google.com
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

Register x18 will no longer be used as a caller save register in the
future, so stop using it in the copy_page() code.
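The conversion is easier to follow in C: instead of counting down a
separate byte counter (previously kept in x18), the destination pointer
itself carries the loop state. It is advanced with a 256-byte bias so the
stores use negative offsets, and the loop ends once the pointer crosses a
page boundary. A rough model, assuming 4 KiB pages (illustrative only,
not kernel code):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096UL	/* assumed page size for the illustration */

static void copy_page_model(unsigned char *dst, const unsigned char *src)
{
	unsigned char buf[128];			/* stands in for x2..x17 */
	unsigned char *d = dst + 256;		/* add x0, x0, #256 (the bias) */
	const unsigned char *s = src + 128;	/* add x1, x1, #128 */

	memcpy(buf, src, 128);			/* initial loads before the loop */

	for (;;) {
		/* tst x0, #(PAGE_SIZE - 1): evaluated before the pointer
		 * advances; the b.ne at the bottom consumes these flags. */
		bool last = !((uintptr_t)d & (PAGE_SIZE - 1));

		memcpy(d - 256, buf, 128);	/* stnp ... [x0, #off - 256] */
		memcpy(buf, s, 128);		/* ldp the next source block */
		d += 128;
		s += 128;
		if (last)			/* b.ne 1b */
			break;
	}
	memcpy(d - 256, buf, 128);		/* trailing stnp block */
}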
Link: https://patchwork.kernel.org/patch/9836869/ Co-developed-by: Ard Biesheuvel [ changed the offset and bias to be explicit ] Signed-off-by: Sami Tolvanen Reviewed-by: Mark Rutland --- arch/arm64/lib/copy_page.S | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S index bbb8562396af..290dd3c5266c 100644 --- a/arch/arm64/lib/copy_page.S +++ b/arch/arm64/lib/copy_page.S @@ -34,45 +34,45 @@ alternative_else_nop_endif ldp x14, x15, [x1, #96] ldp x16, x17, [x1, #112] - mov x18, #(PAGE_SIZE - 128) + add x0, x0, #256 add x1, x1, #128 1: - subs x18, x18, #128 + tst x0, #(PAGE_SIZE - 1) alternative_if ARM64_HAS_NO_HW_PREFETCH prfm pldl1strm, [x1, #384] alternative_else_nop_endif - stnp x2, x3, [x0] + stnp x2, x3, [x0, #-256] ldp x2, x3, [x1] - stnp x4, x5, [x0, #16] + stnp x4, x5, [x0, #16 - 256] ldp x4, x5, [x1, #16] - stnp x6, x7, [x0, #32] + stnp x6, x7, [x0, #32 - 256] ldp x6, x7, [x1, #32] - stnp x8, x9, [x0, #48] + stnp x8, x9, [x0, #48 - 256] ldp x8, x9, [x1, #48] - stnp x10, x11, [x0, #64] + stnp x10, x11, [x0, #64 - 256] ldp x10, x11, [x1, #64] - stnp x12, x13, [x0, #80] + stnp x12, x13, [x0, #80 - 256] ldp x12, x13, [x1, #80] - stnp x14, x15, [x0, #96] + stnp x14, x15, [x0, #96 - 256] ldp x14, x15, [x1, #96] - stnp x16, x17, [x0, #112] + stnp x16, x17, [x0, #112 - 256] ldp x16, x17, [x1, #112] add x0, x0, #128 add x1, x1, #128 - b.gt 1b + b.ne 1b - stnp x2, x3, [x0] - stnp x4, x5, [x0, #16] - stnp x6, x7, [x0, #32] - stnp x8, x9, [x0, #48] - stnp x10, x11, [x0, #64] - stnp x12, x13, [x0, #80] - stnp x14, x15, [x0, #96] - stnp x16, x17, [x0, #112] + stnp x2, x3, [x0, #-256] + stnp x4, x5, [x0, #16 - 256] + stnp x6, x7, [x0, #32 - 256] + stnp x8, x9, [x0, #48 - 256] + stnp x10, x11, [x0, #64 - 256] + stnp x12, x13, [x0, #80 - 256] + stnp x14, x15, [x0, #96 - 256] + stnp x16, x17, [x0, #112 - 256] ret ENDPROC(copy_page) From patchwork Thu Oct 31 16:46:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221635 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 4847915AB for ; Thu, 31 Oct 2019 16:59:17 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 6BF0A2086D for ; Thu, 31 Oct 2019 16:59:16 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="VeV07+y/" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6BF0A2086D Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17188-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 27654 invoked by uid 550); 31 Oct 2019 16:59:00 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 13622 invoked from network); 31 Oct 2019 16:47:03 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; 
Date: Thu, 31 Oct 2019 09:46:23 -0700
Message-Id: <20191031164637.48901-4-samitolvanen@google.com>
In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com>
Subject: [PATCH v3 03/17] arm64: kvm: stop treating register x18 as caller save
From: samitolvanen@google.com
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

In preparation for reserving x18, stop treating it as caller save in the
KVM guest entry/exit code. Currently, the code assumes there is no need
to preserve it for the host, given that it would have been assumed
clobbered anyway by the function call to __guest_enter(). Instead,
preserve its value and restore it upon return.
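A toy model of the new contract (not the actual KVM code; the names below
are invented for illustration): x18 is now saved into the host context
before the guest runs and restored afterwards, exactly like x19..x28,
instead of being assumed clobbered across __guest_enter():

#include <stdio.h>

struct ctxt { unsigned long x[31]; };	/* mirrors CPU_XREG_OFFSET(0..30) */

static unsigned long regs[31];		/* stand-in for the physical registers */

static void save_callee_saved(struct ctxt *c)
{
	for (int i = 18; i <= 30; i++)	/* 19..30 before this patch */
		c->x[i] = regs[i];
}

static void restore_callee_saved(const struct ctxt *c)
{
	for (int i = 18; i <= 30; i++)
		regs[i] = c->x[i];
}

int main(void)
{
	struct ctxt host = { { 0 } };

	regs[18] = 0xdeadbeef;		/* e.g. a reserved value the host relies on */
	save_callee_saved(&host);	/* __guest_enter: store the host regs */
	regs[18] = 0;			/* the guest may clobber x18 freely ... */
	restore_callee_saved(&host);	/* __guest_exit: ... the host gets it back */
	printf("%#lx\n", regs[18]);
	return 0;
}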
Link: https://patchwork.kernel.org/patch/9836891/ Co-developed-by: Ard Biesheuvel [ updated commit message, switched from x18 to x29 for the guest context ] Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- arch/arm64/kvm/hyp/entry.S | 41 +++++++++++++++++++------------------- 1 file changed, 20 insertions(+), 21 deletions(-) diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S index e5cc8d66bf53..c3c2d842c609 100644 --- a/arch/arm64/kvm/hyp/entry.S +++ b/arch/arm64/kvm/hyp/entry.S @@ -23,6 +23,7 @@ .pushsection .hyp.text, "ax" .macro save_callee_saved_regs ctxt + str x18, [\ctxt, #CPU_XREG_OFFSET(18)] stp x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)] stp x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)] stp x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)] @@ -32,6 +33,8 @@ .endm .macro restore_callee_saved_regs ctxt + // We assume \ctxt is not x18-x28 + ldr x18, [\ctxt, #CPU_XREG_OFFSET(18)] ldp x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)] ldp x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)] ldp x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)] @@ -48,7 +51,7 @@ ENTRY(__guest_enter) // x0: vcpu // x1: host context // x2-x17: clobbered by macros - // x18: guest context + // x29: guest context // Store the host regs save_callee_saved_regs x1 @@ -67,31 +70,28 @@ alternative_else_nop_endif ret 1: - add x18, x0, #VCPU_CONTEXT + add x29, x0, #VCPU_CONTEXT // Macro ptrauth_switch_to_guest format: // ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3) // The below macro to restore guest keys is not implemented in C code // as it may cause Pointer Authentication key signing mismatch errors // when this feature is enabled for kernel code. - ptrauth_switch_to_guest x18, x0, x1, x2 + ptrauth_switch_to_guest x29, x0, x1, x2 // Restore guest regs x0-x17 - ldp x0, x1, [x18, #CPU_XREG_OFFSET(0)] - ldp x2, x3, [x18, #CPU_XREG_OFFSET(2)] - ldp x4, x5, [x18, #CPU_XREG_OFFSET(4)] - ldp x6, x7, [x18, #CPU_XREG_OFFSET(6)] - ldp x8, x9, [x18, #CPU_XREG_OFFSET(8)] - ldp x10, x11, [x18, #CPU_XREG_OFFSET(10)] - ldp x12, x13, [x18, #CPU_XREG_OFFSET(12)] - ldp x14, x15, [x18, #CPU_XREG_OFFSET(14)] - ldp x16, x17, [x18, #CPU_XREG_OFFSET(16)] - - // Restore guest regs x19-x29, lr - restore_callee_saved_regs x18 - - // Restore guest reg x18 - ldr x18, [x18, #CPU_XREG_OFFSET(18)] + ldp x0, x1, [x29, #CPU_XREG_OFFSET(0)] + ldp x2, x3, [x29, #CPU_XREG_OFFSET(2)] + ldp x4, x5, [x29, #CPU_XREG_OFFSET(4)] + ldp x6, x7, [x29, #CPU_XREG_OFFSET(6)] + ldp x8, x9, [x29, #CPU_XREG_OFFSET(8)] + ldp x10, x11, [x29, #CPU_XREG_OFFSET(10)] + ldp x12, x13, [x29, #CPU_XREG_OFFSET(12)] + ldp x14, x15, [x29, #CPU_XREG_OFFSET(14)] + ldp x16, x17, [x29, #CPU_XREG_OFFSET(16)] + + // Restore guest regs x18-x29, lr + restore_callee_saved_regs x29 // Do not touch any register after this! 
eret @@ -114,7 +114,7 @@ ENTRY(__guest_exit) // Retrieve the guest regs x0-x1 from the stack ldp x2, x3, [sp], #16 // x0, x1 - // Store the guest regs x0-x1 and x4-x18 + // Store the guest regs x0-x1 and x4-x17 stp x2, x3, [x1, #CPU_XREG_OFFSET(0)] stp x4, x5, [x1, #CPU_XREG_OFFSET(4)] stp x6, x7, [x1, #CPU_XREG_OFFSET(6)] @@ -123,9 +123,8 @@ ENTRY(__guest_exit) stp x12, x13, [x1, #CPU_XREG_OFFSET(12)] stp x14, x15, [x1, #CPU_XREG_OFFSET(14)] stp x16, x17, [x1, #CPU_XREG_OFFSET(16)] - str x18, [x1, #CPU_XREG_OFFSET(18)] - // Store the guest regs x19-x29, lr + // Store the guest regs x18-x29, lr save_callee_saved_regs x1 get_host_ctxt x2, x3 From patchwork Thu Oct 31 16:46:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221637 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7A8871390 for ; Thu, 31 Oct 2019 16:59:27 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id CBEFE2086D for ; Thu, 31 Oct 2019 16:59:26 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="DwFMzJZ+" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CBEFE2086D Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17189-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 28223 invoked by uid 550); 31 Oct 2019 16:59:07 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 13776 invoked from network); 31 Oct 2019 16:47:06 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=aHKOoX9zJAla15qSwjd99c+7iNT/pdKXHfBAM0TSbiA=; b=DwFMzJZ+kUlOAWN2GvdIrb0spmd3RCFaXXrjuk+n93RrGiHVTkKIE8QVClddg7kBDe 7IeZak8mKqDEsiR3pvG5D58rS+2OmcCl2VoAJ8fblz+yYSO2P8d24syfdwx60D462Dgh XCr3cWY+P9lvUhF8i41PfwSmLeFuaKf2xbTVnm6vK5gGs924c4E8sqG7J+640Cc4jSv1 COp3Hpg8QTv21PL1iF/DfTs8X6WtTvBta9g6udg3ODicYi1TP2epIsbDEj+VXTpASGOq p/A+gt82BwR62uCobShpcPv4dkr4CW0qj8ti60vpOnCzk1mJe/Au4gEO3Ke231E6vBr6 8IJA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=aHKOoX9zJAla15qSwjd99c+7iNT/pdKXHfBAM0TSbiA=; b=ku9GTJF6y8ATthL/en2l0JQlArBe3WTi4TbjYU9OGVaG8mreZrRM8XDSFqZUF0iBIg lDORcEvp7+mfMFrMowBv/nEmSWqAvM0PLZlkGCEvRL/YktKqYtJp5iq8cQi/oNamDF4g f4M/yYdWXtwOHT0aHtCQnUgySdqPUhpDXu0gxyjFrPr/mw9wK9N2OBUF2DIiDEz0zQ92 Hv5GV3rzT5nz4seAS1Hhl/D4CqL/eGchVaqXz83CTtAA+Ee2/xI11/8pJzHHrhoS8xpb R3LWAnIK7Tzr2qxFD/ElkCGYCbKoZVpVKbcuCTJQxJf68r9upr33liY1nrO+5TjQMIKi LqxA== X-Gm-Message-State: APjAAAVKHI3k0B76a+ujrFpeUUfkHmM9ukXfIxZ0/KyckwolhykaAuzH z11DuW9j0qFMX7Ikss08WhCdUwLZopy0rHwsZf0= X-Google-Smtp-Source: APXvYqzWKFl/JealEA2qBs8OC840r89CQ5kizTD5lUFGsTusSG0GdYwJSJp9KVbGkb+yd42ZvImlbPdCOavSDAkzBMQ= X-Received: by 2002:a63:1904:: with SMTP id 
Date: Thu, 31 Oct 2019 09:46:24 -0700
Message-Id: <20191031164637.48901-5-samitolvanen@google.com>
In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com>
Subject: [PATCH v3 04/17] arm64: kernel: avoid x18 in __cpu_soft_restart
From: samitolvanen@google.com
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

From: Ard Biesheuvel

The code in __cpu_soft_restart() uses x18 as an arbitrary temp register,
which will shortly be disallowed. So use x8 instead.

Link: https://patchwork.kernel.org/patch/9836877/
Signed-off-by: Ard Biesheuvel
Signed-off-by: Sami Tolvanen
Reviewed-by: Mark Rutland
Reviewed-by: Kees Cook
---
 arch/arm64/kernel/cpu-reset.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
index 6ea337d464c4..32c7bf858dd9 100644
--- a/arch/arm64/kernel/cpu-reset.S
+++ b/arch/arm64/kernel/cpu-reset.S
@@ -42,11 +42,11 @@ ENTRY(__cpu_soft_restart)
 	mov	x0, #HVC_SOFT_RESTART
 	hvc	#0				// no return
 
-1:	mov	x18, x1				// entry
+1:	mov	x8, x1				// entry
 	mov	x0, x2				// arg0
 	mov	x1, x3				// arg1
 	mov	x2, x4				// arg2
-	br	x18
+	br	x8
ENDPROC(__cpu_soft_restart)

.popsection
From patchwork Thu Oct 31 16:46:25 2019
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11221639
Date: Thu, 31 Oct 2019 09:46:25 -0700
Message-Id: <20191031164637.48901-6-samitolvanen@google.com>
In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com>
Subject: [PATCH v3 05/17] add support for Clang's Shadow Call Stack (SCS)
From: samitolvanen@google.com
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

This change adds generic support for Clang's Shadow Call Stack, which
uses a shadow stack to protect return addresses from being overwritten
by an attacker. Details are available here:

  https://clang.llvm.org/docs/ShadowCallStack.html

Note that security guarantees in the kernel differ from the ones
documented for user space. The kernel must store addresses of shadow
stacks used by other tasks and interrupt handlers in memory, which means
an attacker capable of reading and writing arbitrary memory may be able
to locate them and hijack control flow by modifying shadow stacks that
are not currently in use.
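As background, the instrumentation itself is emitted by the compiler, not
by this patch: on arm64, each instrumented function's prologue pushes the
return address to a separate shadow stack addressed by the reserved x18
register, and the epilogue reloads it before returning. A hedged C sketch
of the idea (illustrative only; shadow_stack and scs_sp are names invented
here, not kernel symbols):

/*
 * Per instrumented function the compiler emits roughly:
 *
 *   prologue:  str x30, [x18], #8      // push return address
 *   epilogue:  ldr x30, [x18, #-8]!    // reload it before ret
 */
#include <stdint.h>
#include <stdio.h>

static uintptr_t shadow_stack[128];		/* cf. the 1 KiB SCS_SIZE */
static uintptr_t *scs_sp = shadow_stack;	/* held in x18 on arm64 */

static uintptr_t instrumented_call(uintptr_t return_addr)
{
	*scs_sp++ = return_addr;		/* prologue: push to shadow stack */

	uintptr_t frame[2] = { 0, return_addr };/* on-stack copy of the LR */
	frame[1] = 0x41414141;			/* a simulated overflow clobbers it */
	(void)frame;

	return *--scs_sp;			/* epilogue: the return address comes
						 * from the shadow stack, not the
						 * corrupted frame */
}

int main(void)
{
	printf("return address preserved: %#lx\n",
	       (unsigned long)instrumented_call(0x1234));
	return 0;
}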
Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- Makefile | 6 ++ arch/Kconfig | 33 +++++++ include/linux/compiler-clang.h | 6 ++ include/linux/compiler_types.h | 4 + include/linux/scs.h | 54 +++++++++++ init/init_task.c | 8 ++ kernel/Makefile | 1 + kernel/fork.c | 9 ++ kernel/sched/core.c | 2 + kernel/sched/sched.h | 1 + kernel/scs.c | 169 +++++++++++++++++++++++++++++++++ 11 files changed, 293 insertions(+) create mode 100644 include/linux/scs.h create mode 100644 kernel/scs.c diff --git a/Makefile b/Makefile index 79be70bf2899..e6337314f8fb 100644 --- a/Makefile +++ b/Makefile @@ -846,6 +846,12 @@ ifdef CONFIG_LIVEPATCH KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone) endif +ifdef CONFIG_SHADOW_CALL_STACK +CC_FLAGS_SCS := -fsanitize=shadow-call-stack +KBUILD_CFLAGS += $(CC_FLAGS_SCS) +export CC_FLAGS_SCS +endif + # arch Makefile may override CC so keep this after arch Makefile is included NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include) diff --git a/arch/Kconfig b/arch/Kconfig index 5f8a5d84dbbe..5e34cbcd8d6a 100644 --- a/arch/Kconfig +++ b/arch/Kconfig @@ -521,6 +521,39 @@ config STACKPROTECTOR_STRONG about 20% of all kernel functions, which increases the kernel code size by about 2%. +config ARCH_SUPPORTS_SHADOW_CALL_STACK + bool + help + An architecture should select this if it supports Clang's Shadow + Call Stack, has asm/scs.h, and implements runtime support for shadow + stack switching. + +config SHADOW_CALL_STACK_VMAP + bool + depends on SHADOW_CALL_STACK + help + Use virtually mapped shadow call stacks. Selecting this option + provides better stack exhaustion protection, but increases per-thread + memory consumption as a full page is allocated for each shadow stack. + +config SHADOW_CALL_STACK + bool "Clang Shadow Call Stack" + depends on ARCH_SUPPORTS_SHADOW_CALL_STACK + help + This option enables Clang's Shadow Call Stack, which uses a + shadow stack to protect function return addresses from being + overwritten by an attacker. More information can be found from + Clang's documentation: + + https://clang.llvm.org/docs/ShadowCallStack.html + + Note that security guarantees in the kernel differ from the ones + documented for user space. The kernel must store addresses of shadow + stacks used by other tasks and interrupt handlers in memory, which + means an attacker capable reading and writing arbitrary memory may + be able to locate them and hijack control flow by modifying shadow + stacks that are not currently in use. + config HAVE_ARCH_WITHIN_STACK_FRAMES bool help diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h index 333a6695a918..18fc4d29ef27 100644 --- a/include/linux/compiler-clang.h +++ b/include/linux/compiler-clang.h @@ -42,3 +42,9 @@ * compilers, like ICC. */ #define barrier() __asm__ __volatile__("" : : : "memory") + +#if __has_feature(shadow_call_stack) +# define __noscs __attribute__((__no_sanitize__("shadow-call-stack"))) +#else +# define __noscs +#endif diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h index 72393a8c1a6c..be5d5be4b1ae 100644 --- a/include/linux/compiler_types.h +++ b/include/linux/compiler_types.h @@ -202,6 +202,10 @@ struct ftrace_likely_data { # define randomized_struct_fields_end #endif +#ifndef __noscs +# define __noscs +#endif + #ifndef asm_volatile_goto #define asm_volatile_goto(x...) 
asm goto(x) #endif diff --git a/include/linux/scs.h b/include/linux/scs.h new file mode 100644 index 000000000000..0b70aff3846a --- /dev/null +++ b/include/linux/scs.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Shadow Call Stack support. + * + * Copyright (C) 2019 Google LLC + */ + +#ifndef _LINUX_SCS_H +#define _LINUX_SCS_H + +#include +#include +#include + +#ifdef CONFIG_SHADOW_CALL_STACK + +/* + * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit + * architecture) provided ~40% safety margin on stack usage while keeping + * memory allocation overhead reasonable. + */ +#define SCS_SIZE 1024 +#define GFP_SCS (GFP_KERNEL | __GFP_ZERO) + +/* A random number to mark the end of the shadow stack. */ +#define SCS_END_MAGIC 0xaf0194819b1635f6UL + +#define task_scs(tsk) (task_thread_info(tsk)->shadow_call_stack) + +static inline void task_set_scs(struct task_struct *tsk, void *s) +{ + task_scs(tsk) = s; +} + +extern void scs_init(void); +extern void scs_task_reset(struct task_struct *tsk); +extern int scs_prepare(struct task_struct *tsk, int node); +extern bool scs_corrupted(struct task_struct *tsk); +extern void scs_release(struct task_struct *tsk); + +#else /* CONFIG_SHADOW_CALL_STACK */ + +#define task_scs(tsk) NULL + +static inline void task_set_scs(struct task_struct *tsk, void *s) {} +static inline void scs_init(void) {} +static inline void scs_task_reset(struct task_struct *tsk) {} +static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; } +static inline bool scs_corrupted(struct task_struct *tsk) { return false; } +static inline void scs_release(struct task_struct *tsk) {} + +#endif /* CONFIG_SHADOW_CALL_STACK */ + +#endif /* _LINUX_SCS_H */ diff --git a/init/init_task.c b/init/init_task.c index 9e5cbe5eab7b..cbd40460e903 100644 --- a/init/init_task.c +++ b/init/init_task.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include @@ -184,6 +185,13 @@ struct task_struct init_task }; EXPORT_SYMBOL(init_task); +#ifdef CONFIG_SHADOW_CALL_STACK +unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] __init_task_data + __aligned(SCS_SIZE) = { + [(SCS_SIZE / sizeof(long)) - 1] = SCS_END_MAGIC +}; +#endif + /* * Initial thread structure. Alignment of this is handled by a special * linker map entry. 
diff --git a/kernel/Makefile b/kernel/Makefile index daad787fb795..313dbd44d576 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -102,6 +102,7 @@ obj-$(CONFIG_TRACEPOINTS) += trace/ obj-$(CONFIG_IRQ_WORK) += irq_work.o obj-$(CONFIG_CPU_PM) += cpu_pm.o obj-$(CONFIG_BPF) += bpf/ +obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o obj-$(CONFIG_PERF_EVENTS) += events/ diff --git a/kernel/fork.c b/kernel/fork.c index bcdf53125210..3fa7ba64c62d 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -94,6 +94,7 @@ #include #include #include +#include #include #include @@ -451,6 +452,8 @@ void put_task_stack(struct task_struct *tsk) void free_task(struct task_struct *tsk) { + scs_release(tsk); + #ifndef CONFIG_THREAD_INFO_IN_TASK /* * The task is finally done with both the stack and thread_info, @@ -834,6 +837,8 @@ void __init fork_init(void) NULL, free_vm_stack_cache); #endif + scs_init(); + lockdep_init_task(&init_task); uprobes_init(); } @@ -893,6 +898,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) if (err) goto free_stack; + err = scs_prepare(tsk, node); + if (err) + goto free_stack; + #ifdef CONFIG_SECCOMP /* * We must handle setting up seccomp filters once we're under diff --git a/kernel/sched/core.c b/kernel/sched/core.c index dd05a378631a..e7faeb383008 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -6013,6 +6013,8 @@ void init_idle(struct task_struct *idle, int cpu) raw_spin_lock_irqsave(&idle->pi_lock, flags); raw_spin_lock(&rq->lock); + scs_task_reset(idle); + __sched_fork(0, idle); idle->state = TASK_RUNNING; idle->se.exec_start = sched_clock(); diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h index 0db2c1b3361e..c153003a011c 100644 --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -58,6 +58,7 @@ #include #include #include +#include #include #include #include diff --git a/kernel/scs.c b/kernel/scs.c new file mode 100644 index 000000000000..7c1a40020754 --- /dev/null +++ b/kernel/scs.c @@ -0,0 +1,169 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Shadow Call Stack support. + * + * Copyright (C) 2019 Google LLC + */ + +#include +#include +#include +#include +#include +#include +#include + +static inline void *__scs_base(struct task_struct *tsk) +{ + /* + * We allow architectures to use the shadow_call_stack field in + * struct thread_info to store the current shadow stack pointer + * during context switches. + * + * This allows the implementation to also clear the field when + * the task is active to avoid keeping pointers to the current + * task's shadow stack in memory. This can make it harder for an + * attacker to locate the shadow stack, but also requires us to + * compute the base address when needed. + * + * We assume the stack is aligned to SCS_SIZE. + */ + return (void *)((uintptr_t)task_scs(tsk) & ~(SCS_SIZE - 1)); +} + +#ifdef CONFIG_SHADOW_CALL_STACK_VMAP + +/* Keep a cache of shadow stacks */ +#define SCS_CACHE_SIZE 2 +static DEFINE_PER_CPU(void *, scs_cache[SCS_CACHE_SIZE]); + +static void *scs_alloc(int node) +{ + int i; + + for (i = 0; i < SCS_CACHE_SIZE; i++) { + void *s; + + s = this_cpu_xchg(scs_cache[i], NULL); + if (s) { + memset(s, 0, SCS_SIZE); + return s; + } + } + + /* + * We allocate a full page for the shadow stack, which should be + * more than we need. Check the assumption nevertheless. 
+ */ + BUILD_BUG_ON(SCS_SIZE > PAGE_SIZE); + + return __vmalloc_node_range(PAGE_SIZE, SCS_SIZE, + VMALLOC_START, VMALLOC_END, + GFP_SCS, PAGE_KERNEL, 0, + node, __builtin_return_address(0)); +} + +static void scs_free(void *s) +{ + int i; + + for (i = 0; i < SCS_CACHE_SIZE; i++) + if (this_cpu_cmpxchg(scs_cache[i], 0, s) == 0) + return; + + vfree_atomic(s); +} + +static int scs_cleanup(unsigned int cpu) +{ + int i; + void **cache = per_cpu_ptr(scs_cache, cpu); + + for (i = 0; i < SCS_CACHE_SIZE; i++) { + vfree(cache[i]); + cache[i] = NULL; + } + + return 0; +} + +void __init scs_init(void) +{ + cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL, + scs_cleanup); +} + +#else /* !CONFIG_SHADOW_CALL_STACK_VMAP */ + +static struct kmem_cache *scs_cache; + +static inline void *scs_alloc(int node) +{ + return kmem_cache_alloc_node(scs_cache, GFP_SCS, node); +} + +static inline void scs_free(void *s) +{ + kmem_cache_free(scs_cache, s); +} + +void __init scs_init(void) +{ + scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE, + 0, NULL); + WARN_ON(!scs_cache); +} + +#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */ + +static inline unsigned long *scs_magic(struct task_struct *tsk) +{ + return (unsigned long *)(__scs_base(tsk) + SCS_SIZE) - 1; +} + +static inline void scs_set_magic(struct task_struct *tsk) +{ + *scs_magic(tsk) = SCS_END_MAGIC; +} + +void scs_task_reset(struct task_struct *tsk) +{ + /* + * Reset the shadow stack to the base address in case the task + * is reused. + */ + task_set_scs(tsk, __scs_base(tsk)); +} + +int scs_prepare(struct task_struct *tsk, int node) +{ + void *s; + + s = scs_alloc(node); + if (!s) + return -ENOMEM; + + task_set_scs(tsk, s); + scs_set_magic(tsk); + + return 0; +} + +bool scs_corrupted(struct task_struct *tsk) +{ + return *scs_magic(tsk) != SCS_END_MAGIC; +} + +void scs_release(struct task_struct *tsk) +{ + void *s; + + s = __scs_base(tsk); + if (!s) + return; + + WARN_ON(scs_corrupted(tsk)); + + task_set_scs(tsk, NULL); + scs_free(s); +} From patchwork Thu Oct 31 16:46:26 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221641 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 00AF415AB for ; Thu, 31 Oct 2019 16:59:55 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 2019B208C0 for ; Thu, 31 Oct 2019 16:59:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="rs0uCO2L" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2019B208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17191-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 29714 invoked by uid 550); 31 Oct 2019 16:59:13 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 14071 invoked from network); 31 Oct 2019 16:47:11 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; 
Date: Thu, 31 Oct 2019 09:46:26 -0700
Message-Id: <20191031164637.48901-7-samitolvanen@google.com>
In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com>
Subject: [PATCH v3 06/17] scs: add accounting
From: samitolvanen@google.com
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

This change adds accounting for the memory allocated for shadow stacks.
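With the patch applied and CONFIG_SHADOW_CALL_STACK enabled, the new total
is visible to userspace as the "ShadowCallStack:" line in /proc/meminfo,
per node in the sysfs node meminfo, and as nr_shadow_call_stack_bytes in
/proc/vmstat. A minimal userspace reader, for illustration only:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "ShadowCallStack:", 16))
			fputs(line, stdout);	/* e.g. total shadow stack kB */
	fclose(f);
	return 0;
}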
Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- drivers/base/node.c | 6 ++++++ fs/proc/meminfo.c | 4 ++++ include/linux/mmzone.h | 3 +++ kernel/scs.c | 19 +++++++++++++++++++ mm/page_alloc.c | 6 ++++++ mm/vmstat.c | 3 +++ 6 files changed, 41 insertions(+) diff --git a/drivers/base/node.c b/drivers/base/node.c index 296546ffed6c..111e58ec231e 100644 --- a/drivers/base/node.c +++ b/drivers/base/node.c @@ -415,6 +415,9 @@ static ssize_t node_read_meminfo(struct device *dev, "Node %d AnonPages: %8lu kB\n" "Node %d Shmem: %8lu kB\n" "Node %d KernelStack: %8lu kB\n" +#ifdef CONFIG_SHADOW_CALL_STACK + "Node %d ShadowCallStack:%8lu kB\n" +#endif "Node %d PageTables: %8lu kB\n" "Node %d NFS_Unstable: %8lu kB\n" "Node %d Bounce: %8lu kB\n" @@ -438,6 +441,9 @@ static ssize_t node_read_meminfo(struct device *dev, nid, K(node_page_state(pgdat, NR_ANON_MAPPED)), nid, K(i.sharedram), nid, sum_zone_node_page_state(nid, NR_KERNEL_STACK_KB), +#ifdef CONFIG_SHADOW_CALL_STACK + nid, sum_zone_node_page_state(nid, NR_KERNEL_SCS_BYTES) / 1024, +#endif nid, K(sum_zone_node_page_state(nid, NR_PAGETABLE)), nid, K(node_page_state(pgdat, NR_UNSTABLE_NFS)), nid, K(sum_zone_node_page_state(nid, NR_BOUNCE)), diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c index 8c1f1bb1a5ce..49768005a79e 100644 --- a/fs/proc/meminfo.c +++ b/fs/proc/meminfo.c @@ -103,6 +103,10 @@ static int meminfo_proc_show(struct seq_file *m, void *v) show_val_kb(m, "SUnreclaim: ", sunreclaim); seq_printf(m, "KernelStack: %8lu kB\n", global_zone_page_state(NR_KERNEL_STACK_KB)); +#ifdef CONFIG_SHADOW_CALL_STACK + seq_printf(m, "ShadowCallStack:%8lu kB\n", + global_zone_page_state(NR_KERNEL_SCS_BYTES) / 1024); +#endif show_val_kb(m, "PageTables: ", global_zone_page_state(NR_PAGETABLE)); diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index bda20282746b..fcb8c1708f9e 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -200,6 +200,9 @@ enum zone_stat_item { NR_MLOCK, /* mlock()ed pages found and moved off LRU */ NR_PAGETABLE, /* used for pagetables */ NR_KERNEL_STACK_KB, /* measured in KiB */ +#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK) + NR_KERNEL_SCS_BYTES, /* measured in bytes */ +#endif /* Second 128 byte cacheline */ NR_BOUNCE, #if IS_ENABLED(CONFIG_ZSMALLOC) diff --git a/kernel/scs.c b/kernel/scs.c index 7c1a40020754..7780fc4e29ac 100644 --- a/kernel/scs.c +++ b/kernel/scs.c @@ -11,6 +11,7 @@ #include #include #include +#include #include static inline void *__scs_base(struct task_struct *tsk) @@ -74,6 +75,11 @@ static void scs_free(void *s) vfree_atomic(s); } +static struct page *__scs_page(struct task_struct *tsk) +{ + return vmalloc_to_page(__scs_base(tsk)); +} + static int scs_cleanup(unsigned int cpu) { int i; @@ -107,6 +113,11 @@ static inline void scs_free(void *s) kmem_cache_free(scs_cache, s); } +static struct page *__scs_page(struct task_struct *tsk) +{ + return virt_to_page(__scs_base(tsk)); +} + void __init scs_init(void) { scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE, @@ -135,6 +146,12 @@ void scs_task_reset(struct task_struct *tsk) task_set_scs(tsk, __scs_base(tsk)); } +static void scs_account(struct task_struct *tsk, int account) +{ + mod_zone_page_state(page_zone(__scs_page(tsk)), NR_KERNEL_SCS_BYTES, + account * SCS_SIZE); +} + int scs_prepare(struct task_struct *tsk, int node) { void *s; @@ -145,6 +162,7 @@ int scs_prepare(struct task_struct *tsk, int node) task_set_scs(tsk, s); scs_set_magic(tsk); + scs_account(tsk, 1); return 0; } @@ -164,6 +182,7 @@ void scs_release(struct 
task_struct *tsk) WARN_ON(scs_corrupted(tsk)); + scs_account(tsk, -1); task_set_scs(tsk, NULL); scs_free(s); } diff --git a/mm/page_alloc.c b/mm/page_alloc.c index ecc3dbad606b..fe17d69d98a7 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5361,6 +5361,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) " managed:%lukB" " mlocked:%lukB" " kernel_stack:%lukB" +#ifdef CONFIG_SHADOW_CALL_STACK + " shadow_call_stack:%lukB" +#endif " pagetables:%lukB" " bounce:%lukB" " free_pcp:%lukB" @@ -5382,6 +5385,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask) K(zone_managed_pages(zone)), K(zone_page_state(zone, NR_MLOCK)), zone_page_state(zone, NR_KERNEL_STACK_KB), +#ifdef CONFIG_SHADOW_CALL_STACK + zone_page_state(zone, NR_KERNEL_SCS_BYTES) / 1024, +#endif K(zone_page_state(zone, NR_PAGETABLE)), K(zone_page_state(zone, NR_BOUNCE)), K(free_pcp), diff --git a/mm/vmstat.c b/mm/vmstat.c index 6afc892a148a..9fe4afe670fe 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -1118,6 +1118,9 @@ const char * const vmstat_text[] = { "nr_mlock", "nr_page_table_pages", "nr_kernel_stack", +#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK) + "nr_shadow_call_stack_bytes", +#endif "nr_bounce", #if IS_ENABLED(CONFIG_ZSMALLOC) "nr_zspages", From patchwork Thu Oct 31 16:46:27 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221643 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 606A915AB for ; Thu, 31 Oct 2019 17:00:08 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id B3924208C0 for ; Thu, 31 Oct 2019 17:00:07 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="L6nedrmE" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B3924208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17192-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 29975 invoked by uid 550); 31 Oct 2019 16:59:16 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 14225 invoked from network); 31 Oct 2019 16:47:14 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ISQCDuiClH+Ob0sv0IF5h/3w79WcDO0VBPNi/N63ddM=; b=L6nedrmEU6JvkHcsnA5dyzyzZUdFX15pwpo+PGFFiFEz0adILiGuaSNCBEk5zwayGV E9E9rgsrq5/1A2h2MtticPSYVLesH41cnymaor0lAwHnDlDWxTf5/4qQBQJbY1uWFtLZ C6sr+tdm50IQTwMHAjyoVlcXkjLcFiHznGojckMYMtDoi/bOtSH0IlV0Tnioy87094XS ezKAXvoCEbuRluKXTl7DxPY8VwUTSt5guCPWasnZX2NkFzFDs5i1SJLDNnu+b65KtE/0 /kLy//TY9FnsnLxzNsp6fluwupXbn29fiFnAO1x5t8/AaHOuDntvEDf7xlit3hYA7ljG EZkg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ISQCDuiClH+Ob0sv0IF5h/3w79WcDO0VBPNi/N63ddM=; 
Date: Thu, 31 Oct 2019 09:46:27 -0700
Message-Id: <20191031164637.48901-8-samitolvanen@google.com>
In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com>
Subject: [PATCH v3 07/17] scs: add support for stack usage debugging
From: samitolvanen@google.com
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

Implements CONFIG_DEBUG_STACK_USAGE for shadow stacks.

Signed-off-by: Sami Tolvanen
---
 kernel/scs.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/kernel/scs.c b/kernel/scs.c
index 7780fc4e29ac..67c43af627d1 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -167,6 +167,44 @@ int scs_prepare(struct task_struct *tsk, int node)
 	return 0;
 }
 
+#ifdef CONFIG_DEBUG_STACK_USAGE
+static inline unsigned long scs_used(struct task_struct *tsk)
+{
+	unsigned long *p = __scs_base(tsk);
+	unsigned long *end = scs_magic(tsk);
+	uintptr_t s = (uintptr_t)p;
+
+	while (p < end && *p)
+		p++;
+
+	return (uintptr_t)p - s;
+}
+
+static void scs_check_usage(struct task_struct *tsk)
+{
+	static DEFINE_SPINLOCK(lock);
+	static unsigned long highest;
+	unsigned long used = scs_used(tsk);
+
+	if (used <= highest)
+		return;
+
+	spin_lock(&lock);
+
+	if (used > highest) {
+		pr_info("%s: highest shadow stack usage %lu bytes\n",
+			__func__, used);
+		highest = used;
+	}
+
+	spin_unlock(&lock);
+}
+#else
+static inline void scs_check_usage(struct task_struct *tsk)
+{
+}
+#endif
+
 bool scs_corrupted(struct task_struct *tsk)
 {
 	return *scs_magic(tsk) != SCS_END_MAGIC;
@@ -181,6 +219,7 @@ void scs_release(struct task_struct *tsk)
 		return;
 
 	WARN_ON(scs_corrupted(tsk));
+	scs_check_usage(tsk);
 
 	scs_account(tsk, -1);
 	task_set_scs(tsk, NULL);
From patchwork Thu Oct 31 16:46:28 2019
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11221645
Date: Thu, 31 Oct 2019 09:46:28 -0700
Message-Id: <20191031164637.48901-9-samitolvanen@google.com>
In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com>
Subject: [PATCH v3 08/17] kprobes: fix compilation without CONFIG_KRETPROBES
From: samitolvanen@google.com
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen

kprobe_on_func_entry and arch_kprobe_on_func_entry need to be available
even if CONFIG_KRETPROBES is not selected.
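The shape of the fix in miniature (scaffolding invented purely for
illustration, not kernel code): the entry-check helper is defined outside
the CONFIG_KRETPROBES guard, so a caller that does not depend on
kretprobes still compiles and links when that option is disabled:

#include <stdbool.h>
#include <stdio.h>

/* Defined unconditionally, as in the patched kernel/kprobes.c. */
static bool arch_kprobe_on_func_entry(unsigned long offset)
{
	return !offset;		/* default: function entry means offset 0 */
}

#ifdef CONFIG_KRETPROBES
/* register_kretprobe() and the kretprobe pre-handler would live here. */
#endif

int main(void)
{
	/* A caller that is built even with CONFIG_KRETPROBES disabled. */
	printf("offset 0 at entry: %d\n", arch_kprobe_on_func_entry(0));
	printf("offset 8 at entry: %d\n", arch_kprobe_on_func_entry(8));
	return 0;
}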
Signed-off-by: Sami Tolvanen Acked-by: Masami Hiramatsu Reviewed-by: Kees Cook --- kernel/kprobes.c | 38 +++++++++++++++++++------------------- 1 file changed, 19 insertions(+), 19 deletions(-) diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 53534aa258a6..b5e20a4669b8 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -1829,6 +1829,25 @@ unsigned long __weak arch_deref_entry_point(void *entry) return (unsigned long)entry; } +bool __weak arch_kprobe_on_func_entry(unsigned long offset) +{ + return !offset; +} + +bool kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset) +{ + kprobe_opcode_t *kp_addr = _kprobe_addr(addr, sym, offset); + + if (IS_ERR(kp_addr)) + return false; + + if (!kallsyms_lookup_size_offset((unsigned long)kp_addr, NULL, &offset) || + !arch_kprobe_on_func_entry(offset)) + return false; + + return true; +} + #ifdef CONFIG_KRETPROBES /* * This kprobe pre_handler is registered with every kretprobe. When probe @@ -1885,25 +1904,6 @@ static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs) } NOKPROBE_SYMBOL(pre_handler_kretprobe); -bool __weak arch_kprobe_on_func_entry(unsigned long offset) -{ - return !offset; -} - -bool kprobe_on_func_entry(kprobe_opcode_t *addr, const char *sym, unsigned long offset) -{ - kprobe_opcode_t *kp_addr = _kprobe_addr(addr, sym, offset); - - if (IS_ERR(kp_addr)) - return false; - - if (!kallsyms_lookup_size_offset((unsigned long)kp_addr, NULL, &offset) || - !arch_kprobe_on_func_entry(offset)) - return false; - - return true; -} - int register_kretprobe(struct kretprobe *rp) { int ret = 0; From patchwork Thu Oct 31 16:46:29 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221647 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6CF1F15AB for ; Thu, 31 Oct 2019 17:00:35 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 94A0C208C0 for ; Thu, 31 Oct 2019 17:00:34 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Q6nZNOFX" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 94A0C208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17194-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 31840 invoked by uid 550); 31 Oct 2019 16:59:27 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 15562 invoked from network); 31 Oct 2019 16:47:19 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=K9E/B478ig/3FT/HPhGFdlo0v28kyi4gF0t5BJVte+4=; b=Q6nZNOFXJobNQh66SBURqXR9jjgGc7xPtfVgiH00Q94Wq/lt68cuARge++0iJSOHbl oW4NMEq+9cHqz9gbyOSyiBZcNrTjy/SOh+b8CYr8g09qvaCHNujmM1eqjZChkaUIta8N R1Nytm89Z4g9Csz0lbSdVY2lZ5qMNHUcg9EX/BaicjSL2hYM0k+G3ChifSzc8IJFDYk1 GpIbxO2R5s2CgPxCiu0JT+4D9XPEJ59tckUnFp1YBhgg19qHemtrJSnWnwdM64fHfqzH 
V51MYQa1MpvdRri6T2bUaJeLk+9cOeQ3PhUqIY7fT381QfU0I53VsE/DJ2wREqjXj4eZ 2Qsg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=K9E/B478ig/3FT/HPhGFdlo0v28kyi4gF0t5BJVte+4=; b=mBTPOmnGiRADNbRTdjkAEVZtZmoMwGijYjan/swfp5SC3yoNceicaWA4zUEkBckUK2 n5wQlIiAAXM1VFfUn7N9DtAEd17lvxCZzWN9aKSjBCp8R6n83JG6ISA87rRkXyArsz2c 8EotHx00N8fQyh6koZZaJCgJM6iuLtfzoxpgCBHDHYvWtBcKCwcWX+KZ8CJvwUhySC/L A9mupofKS7negpmOnWNy2rOxCazivsjqwB4kwHn+xNTKrrtkYwAskEeuAa+D2F2lllGj gAen4JGNIeuRk2vbHWQzBfbDOanRREq/qLof1OmCs61KKsHYe62Bt2yExgeOzZfZ7N6P EAhA== X-Gm-Message-State: APjAAAX04KrUTN737u7PXpnpq8qOtiOR/xolnrcMjeTcBaSU7IiADmDw vFR4mJvxWLaTfmy1BlxkM+k4XtRHQDIqH0f9sjE= X-Google-Smtp-Source: APXvYqzC1qzspsODaXGFhM8lzAz2Kkvitd6Mcb2OnCrnHZg+dDRsPF4zisB/iRaRWzHFab+ubKlzQJYNZnG9GZBONHw= X-Received: by 2002:a63:e145:: with SMTP id h5mr7826628pgk.447.1572540427770; Thu, 31 Oct 2019 09:47:07 -0700 (PDT) Date: Thu, 31 Oct 2019 09:46:29 -0700 In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com> Message-Id: <20191031164637.48901-10-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v3 09/17] arm64: kprobes: fix kprobes without CONFIG_KRETPROBES From: samitolvanen@google.com To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen This allows CONFIG_KRETPROBES to be disabled without disabling kprobes entirely. 
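
Entry probes keep working as before; only return probes go away. As a sketch, a minimal kprobes-only user in the style of samples/kprobes (the module and probed symbol below are illustrative, not part of this patch):

/* Illustrative module: an entry-only probe, which is all that remains
 * available once CONFIG_KRETPROBES is disabled (for example with SCS). */
#include <linux/kprobes.h>
#include <linux/module.h>
#include <linux/printk.h>

static int example_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("entered %s\n", p->symbol_name);
	return 0;
}

static struct kprobe example_kp = {
	.symbol_name	= "do_sys_open",	/* example symbol */
	.pre_handler	= example_pre,
};

static int __init example_init(void)
{
	return register_kprobe(&example_kp);
}

static void __exit example_exit(void)
{
	unregister_kprobe(&example_kp);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");
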
Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- arch/arm64/kernel/probes/kprobes.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/arch/arm64/kernel/probes/kprobes.c b/arch/arm64/kernel/probes/kprobes.c index c4452827419b..98230ae979ca 100644 --- a/arch/arm64/kernel/probes/kprobes.c +++ b/arch/arm64/kernel/probes/kprobes.c @@ -551,6 +551,7 @@ void __kprobes __used *trampoline_probe_handler(struct pt_regs *regs) return (void *)orig_ret_address; } +#ifdef CONFIG_KRETPROBES void __kprobes arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs) { @@ -564,6 +565,7 @@ int __kprobes arch_trampoline_kprobe(struct kprobe *p) { return 0; } +#endif int __init arch_init_kprobes(void) { From patchwork Thu Oct 31 16:46:30 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221649 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id CFB7A15AB for ; Thu, 31 Oct 2019 17:00:48 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 2E0AA208C0 for ; Thu, 31 Oct 2019 17:00:47 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="MFB8Vi7T" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 2E0AA208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17195-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 32000 invoked by uid 550); 31 Oct 2019 16:59:29 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 15707 invoked from network); 31 Oct 2019 16:47:22 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=9Ao7uN2sNi3wFwgEShB4rOY2NPtO1ve2nhewvDVkd70=; b=MFB8Vi7TeDoI2FwYG6EbbYc+LBTOUEvRqAUAKVWlm/Y9wlKU7ujBC6awj8P8GgNg15 SHrqse6Eby/JmzTas4YOpEbcYfE5zV02IRVEzTRd+/8pcGMuY5LmXsc1LvTsYyOPQRbj rWB9hKuYdsKAq5u+1gwPKMLhd3aFBv9y7s34IJD7qUWZFLaH/dCyw0RXobVGkHrXVve3 OIoYdwBvTIcedJKnwnczA/JcOUoSjLk5Niw2NS8hRSGXfbcB1RQzOd3+hhaLOuVMzDiz eb9NXbVSx5Tc5HE4+fuT64N6mKP3gK4YOR6cATCVdP01/rojf3F/IcVEmTxc2qVknVy3 MHuw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=9Ao7uN2sNi3wFwgEShB4rOY2NPtO1ve2nhewvDVkd70=; b=n9Jaldj6bP92UQnNoqu2yza7yj9x8RssTJSYzLtWtPQt2ebtAlsa7R5kzvgGiYOAQV eHMN9LNQ4zM0ryDTjEEDjCzARj7BXtj4OiHvU7lqy7+78/ooqA1uPjMA1zEf0YpZ8bM/ QRpT5RjjQeKkAWCpCDQrmckZkPzo1lVhFbItgTGiKuFdBrBT7nQS9aRZhjRPlQEOuQzq y4igPH0qp1F4idJxdQova1repBQZX2ntzypTcNKMdkV3IOxN80Hhmg338EhxwaC5Ikgs uFEJ+F2dSx6tcadW54KW1ghfIOLI/jdNqtjqRggYDpt/yz9UG3bBz5oulSM8UHczeN74 jzjQ== X-Gm-Message-State: APjAAAWFsqUAQfFbwfQG/rpJMf8w4kB/NAX7CMRvUZHlaSSltwR5Bs+G QsA+hSMbewW8HM5gy49dmw5/++uts8iZy+VP/ZY= X-Google-Smtp-Source: APXvYqyTGY27y+fXLM2c6y9Il6TlGdGiGRpFxeY59Reh3sr0g68TeqvBad+M0W7yvXKqIy6D21VCf66mb2yA9rq9620= 
X-Received: by 2002:a63:d258:: with SMTP id t24mr7711243pgi.289.1572540430252; Thu, 31 Oct 2019 09:47:10 -0700 (PDT) Date: Thu, 31 Oct 2019 09:46:30 -0700 In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com> Message-Id: <20191031164637.48901-11-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v3 10/17] arm64: disable kretprobes with SCS From: samitolvanen@google.com To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen With CONFIG_KRETPROBES, function return addresses are modified to redirect control flow to kretprobe_trampoline. This is incompatible with SCS. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- arch/arm64/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 3f047afb982c..e7b57a8a5531 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -165,7 +165,7 @@ config ARM64 select HAVE_STACKPROTECTOR select HAVE_SYSCALL_TRACEPOINTS select HAVE_KPROBES - select HAVE_KRETPROBES + select HAVE_KRETPROBES if !SHADOW_CALL_STACK select HAVE_GENERIC_VDSO select IOMMU_DMA if IOMMU_SUPPORT select IRQ_DOMAIN From patchwork Thu Oct 31 16:46:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221651 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 74CB115AB for ; Thu, 31 Oct 2019 17:01:02 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id C5A46208C0 for ; Thu, 31 Oct 2019 17:01:01 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="LiHsXWMw" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C5A46208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17196-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 32185 invoked by uid 550); 31 Oct 2019 16:59:32 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 15821 invoked from network); 31 Oct 2019 16:47:25 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=B4VLNzxMuvBiHcofU7yya6+NRz/joAG8uuuPQJlxTTo=; b=LiHsXWMw7bxtDeGEYRXpMn16nSyLdNS5lx+NSaesHfyJwbpJ9dv/hg1+masT0ktx+u GX0uOBJ11DAuMgxht/T5EF39gfFO9xTW1PSK4PEU0jy3aCOEqI3nPxqbXhiO3dkUpjDo urzDfwcuWckptE/Ao67JmdtLQrusiSCYPZDDMBcxYj/czYbCp7NNwbd7zcOwa1VvwCwl bCLZbQgxKTiU/SHhlKA9CxZwmprb/+rX2Rq8iGPkaL2PUwfngGsbWXLDbVI9z7gs6RBM 
TDT5FjyYB3G7P0cpX1mb1ilVQEka4Aw0h0I4BCIiujvaIlw0NIXAczBNyr3B++VVcB+3 +Cow== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=B4VLNzxMuvBiHcofU7yya6+NRz/joAG8uuuPQJlxTTo=; b=s6rliqYjrmqfcSbZeDPiLYwYtxWzPOiasOqm9OO4HTzumZ1sWO9VC97ktqBfyLBmmb 1K8ZbDOCIE1qF9Ifm8mB6Ic2YU1TeeZdKC/DzDWS542KqYRjc6lPjQcrToDxAI4gy3sg cp89/Lv3Se+Kp3Tn2mzFwFE5v8f0zv8HItl+3PzwL06aseOKCPw/WfJg9Mf0V4ba3DHv 4+jIqyHSSn6d6Hm2fr6vZgQ7dNA0Cm1snZ9EKNgHbwq89Sb07i2MAeekkKjbtCitiHtf iHGX8j7Qq6kcM2adXMFKMEjkFb0pQ5BV4UBwk4XHswAncr/NEqg3XWMwazpayOm4MyZQ j0PQ== X-Gm-Message-State: APjAAAXFGi1je3EP9W/KRBGYItPif3naAWzfnLJBSPOKKHMFy/zKqfii TQ6I0faLhxuIzuR/UdM3mJRQkeQLCyU2uSoe2jc= X-Google-Smtp-Source: APXvYqwoggK1YVUJpQXUwc7FiBvv7A/DJq1W/EwQk4BCJ6LoqoLRq9NRXAmJ6C3gzBh1eCK8VeecRlAeD8DpBndQofs= X-Received: by 2002:ac8:22c4:: with SMTP id g4mr5716668qta.45.1572540432881; Thu, 31 Oct 2019 09:47:12 -0700 (PDT) Date: Thu, 31 Oct 2019 09:46:31 -0700 In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com> Message-Id: <20191031164637.48901-12-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v3 11/17] arm64: disable function graph tracing with SCS From: samitolvanen@google.com To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen With CONFIG_FUNCTION_GRAPH_TRACER, function return addresses are modified in ftrace_graph_caller and prepare_ftrace_return to redirect control flow to ftrace_return_to_handler. This is incompatible with SCS. 
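
To illustrate the conflict, the sketch below is not kernel code; it only mirrors the shape of the return-address hook and notes why an SCS epilogue, which reloads its return address from the x18-based shadow stack, never sees the patched value:

/*
 * Simplified picture of return-address hooking. The graph tracer
 * overwrites the saved return address so the callee "returns" into
 * ftrace_return_to_handler; an SCS-instrumented epilogue restores
 * x30 from the shadow stack instead, so the patched slot is ignored
 * and the tracer's per-task return stack is never unwound.
 */
struct frame_record {
	unsigned long fp;	/* saved x29 */
	unsigned long lr;	/* saved x30: the slot the tracer patches */
};

static void hook_return(struct frame_record *frame, unsigned long hooker)
{
	frame->lr = hooker;	/* only effective if the return actually
				 * goes through this slot, which an SCS
				 * epilogue does not */
}
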
Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- arch/arm64/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index e7b57a8a5531..42867174920f 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -148,7 +148,7 @@ config ARM64 select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_ERROR_INJECTION - select HAVE_FUNCTION_GRAPH_TRACER + select HAVE_FUNCTION_GRAPH_TRACER if !SHADOW_CALL_STACK select HAVE_GCC_PLUGINS select HAVE_HW_BREAKPOINT if PERF_EVENTS select HAVE_IRQ_TIME_ACCOUNTING From patchwork Thu Oct 31 16:46:32 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221653 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3211D14DB for ; Thu, 31 Oct 2019 17:01:16 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 7D865208C0 for ; Thu, 31 Oct 2019 17:01:15 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="FyBd2DZR" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7D865208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17197-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 1364 invoked by uid 550); 31 Oct 2019 16:59:49 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 15861 invoked from network); 31 Oct 2019 16:47:27 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=P4MEC/JNWY+R+03CT6BMcXkN4eq/H40mvA6s8gty/0o=; b=FyBd2DZREBGldSQmdAl4ctWmHh0W3zmoUaOrqpBjG9EgJjGFba3+Gk8Ux29MjmAeBe sicne4I37jGUHXgzNedjLIMvsWVJI0ltGloZSO14XUHh9UTS3ARo7k97sP37e/Sb7gOS DkHrE6/vOZ1GtiQPangd/sumad9YQICceEEnhyVa//W66pwrVwkXh36b+vmOtbBKreKQ Cj7tQe1j/bGJXIWzMcMzK/I2qD4i61lE8pjYfsfIy6BaWm8QzTnAU3MchHhGnejhk8Hg PI+E7X6+lVUF1d6bWAow+VePWBv3sp0SROOStKCkYLe92oec5OgOSb4SSowmvPYz6/dp uItw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=P4MEC/JNWY+R+03CT6BMcXkN4eq/H40mvA6s8gty/0o=; b=SNX4z5ME5IOF8kTN5ejx6v8s/4zLL7vsNNBCwvrt9NsrRC2Tojd8uTRtSkwcEQ9gfm wJXivegxk6gT84/v6IhMYnoPZDIRwLTki7crsVeChfHwvAFVpcSE/0Swkoih29von2j5 hRPLK0hAnZbPsYbZoymkd6JaDaNcZxdj9AGnw06wkMUswqW+Nx6ydEb7oLQRJ0GrLJqR pDyeC1AT18aNOTT34XYDooCLdBeQO+aCR6tCMU+Gh9H+caYBzln6CgYcucLyyi0WG96T 6VKyd9dbZEHGW3XlimJe2eQI8i+/AEUlTrovdulpGEGoYfgkDKq3odefDgT7ZX5VdW51 gZvw== X-Gm-Message-State: APjAAAVit/cqUW0yIoIFwzVQJRao1NTkQ8/CmSV3PTVwWFZJmPE0p7c4 CDMU9VBn/iYJ6O202PQM9FzBdP4D5OeLvrX86/Y= X-Google-Smtp-Source: APXvYqyE0l4dMPPmCmPFvy8Hsugb5xtiMU18+Uv3gR8RxmoCWwbgYDkks7ERiHhw5CjwrT6msMO6tlilVlKLUvoeCVQ= X-Received: by 2002:a63:6cf:: with SMTP id 198mr7655687pgg.259.1572540435556; Thu, 31 Oct 2019 09:47:15 -0700 (PDT) Date: Thu, 
31 Oct 2019 09:46:32 -0700 In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com> Message-Id: <20191031164637.48901-13-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v3 12/17] arm64: reserve x18 from general allocation with SCS From: samitolvanen@google.com To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Reserve the x18 register from general allocation when SCS is enabled, because the compiler uses the register to store the current task's shadow stack pointer. Note that all external kernel modules must also be compiled with -ffixed-x18 if the kernel has SCS enabled. Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook --- arch/arm64/Makefile | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 2c0238ce0551..ef76101201b2 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -72,6 +72,10 @@ stack_protector_prepare: prepare0 include/generated/asm-offsets.h)) endif +ifeq ($(CONFIG_SHADOW_CALL_STACK), y) +KBUILD_CFLAGS += -ffixed-x18 +endif + ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__AARCH64EB__ From patchwork Thu Oct 31 16:46:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221655 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1B56D14DB for ; Thu, 31 Oct 2019 17:01:28 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 6E6B6208C0 for ; Thu, 31 Oct 2019 17:01:27 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="M2bQgEs0" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6E6B6208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17198-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 1488 invoked by uid 550); 31 Oct 2019 16:59:51 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 15938 invoked from network); 31 Oct 2019 16:47:30 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=3L2fKWe0mnTKbOXtDJWS5MdWfgbISjm7SM0mf8jhMMs=; b=M2bQgEs0bbsNq8MNMFthK/YaXaNa7lsVfAcQUrIhxB6WHow3p7zT6DMI2hPzQ12JyE dBDu7tLmmK0EEcTfQyug+zjuDgAcD7mKPl6tI6HWHtIIlglIWBDrRvtighs9FmRdTeuA Utjfp/h/JCLg0sI4eJJnpaaKwajzXzqbouGFFnbQW8bF3vHks01IkHDwLbusJqIJbbtz 
HjJPj0I7fONVzT+V+kvjMVqj2r1bUa9I2Mu1YNcXlrszcM7cVDBp5tO3PZ8B+1C1keZf wVg7wmq11bbXPEPFRcmQQMxy7vNpqsJV3Tc3WBhABGzYCzx8tPiXjJJoVrXeY98hn2vr fLvA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=3L2fKWe0mnTKbOXtDJWS5MdWfgbISjm7SM0mf8jhMMs=; b=aW92NdETH6OsOGpInqNt013PWjD2HRaEjT3QGxrQazQ8Br/inWcVmMJdU39UR4huxt eQ248RwZ9lz5myRpfZA/LMw9/E7F2Pmn90IgXzEHujUV9UPGb1Eocqcx1doFAUhRZ0Nf 9oRiPdzSVQZFAmWWfAkEegPIvqR7vJKPlfnRe+G2lGjWvRuTkP+NTdJqHbBTpbEI8bJR XIehsmA9Yv/svopGKfPNMW3WtPMhisdM2wI5BS2tzM0ZvRnXjrFo/1ymue1MtBw9rRo7 936r2m3f9w6MKy40fXSbijG79uDe0MJb5YIDaAo7SpYlQU5UDhTacqCqNN5UHEART291 FqJw== X-Gm-Message-State: APjAAAXkt2/+EauWQ628BFoUkgxM81g+YscGRk7Gb2IIPdAsMjjYfhIs T0Sw+jIWXMlC286T1McgzolV4iYyUgz1312QMQI= X-Google-Smtp-Source: APXvYqydEhVHkKqut2+3OaRd8Srg3EG2C5xvERxBciOyTLScaCHzKIIF3dA+VkAqRL7gbd3nEjDN/XqnvRbx9STn/rY= X-Received: by 2002:a63:d0d:: with SMTP id c13mr7797535pgl.138.1572540438115; Thu, 31 Oct 2019 09:47:18 -0700 (PDT) Date: Thu, 31 Oct 2019 09:46:33 -0700 In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com> Message-Id: <20191031164637.48901-14-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v3 13/17] arm64: preserve x18 when CPU is suspended From: samitolvanen@google.com To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Don't lose the current task's shadow stack when the CPU is suspended. Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook --- arch/arm64/include/asm/suspend.h | 2 +- arch/arm64/mm/proc.S | 9 +++++++++ 2 files changed, 10 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h index 8939c87c4dce..0cde2f473971 100644 --- a/arch/arm64/include/asm/suspend.h +++ b/arch/arm64/include/asm/suspend.h @@ -2,7 +2,7 @@ #ifndef __ASM_SUSPEND_H #define __ASM_SUSPEND_H -#define NR_CTX_REGS 12 +#define NR_CTX_REGS 13 #define NR_CALLEE_SAVED_REGS 12 /* diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index fdabf40a83c8..0e7c353c9dfd 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -49,6 +49,8 @@ * cpu_do_suspend - save CPU registers context * * x0: virtual address of context pointer + * + * This must be kept in sync with struct cpu_suspend_ctx in . 
*/ ENTRY(cpu_do_suspend) mrs x2, tpidr_el0 @@ -73,6 +75,9 @@ alternative_endif stp x8, x9, [x0, #48] stp x10, x11, [x0, #64] stp x12, x13, [x0, #80] +#ifdef CONFIG_SHADOW_CALL_STACK + str x18, [x0, #96] +#endif ret ENDPROC(cpu_do_suspend) @@ -89,6 +94,10 @@ ENTRY(cpu_do_resume) ldp x9, x10, [x0, #48] ldp x11, x12, [x0, #64] ldp x13, x14, [x0, #80] +#ifdef CONFIG_SHADOW_CALL_STACK + ldr x18, [x0, #96] + str xzr, [x0, #96] +#endif msr tpidr_el0, x2 msr tpidrro_el0, x3 msr contextidr_el1, x4 From patchwork Thu Oct 31 16:46:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221657 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id DBA4D15AB for ; Thu, 31 Oct 2019 17:01:39 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 33D9E208C0 for ; Thu, 31 Oct 2019 17:01:38 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="kjwr8ND7" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 33D9E208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17199-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 1643 invoked by uid 550); 31 Oct 2019 16:59:54 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 15989 invoked from network); 31 Oct 2019 16:47:32 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=+uiLq+iUoKYbKnLFQDjwDIIzqGudQy8o1UFEOkmkrHU=; b=kjwr8ND7m1cv4iSNuEZOGhNW0Jvsmvr2aFrQFvCwGpKkkbxdVb8El383SJnvJL4kdI iUDB/VVToctJt6509RMF+oFsBcmbyjOjvFn9KZDRf7pUssUIl1plh0aaN7R1AxB1fYrG hVxnJ8tSwgg+2SqSgKWSR+qrYYrzF4/c+dAi5buVvIC0R4crMHKtWJhzr9H5KMHgiHKt Yi8bhEI9jYa9tcUZiNkBfQ5fzHUOBy9Tpz2CR4BwPRaTdxRCiczEcBH+q/e/DZVOR+5O j3GZxAU0PHjXfHImzbELWB2v0FfyOuNABYuHhuZD/+DaNds085oMgYh3IsgJGrFnX19+ vt8w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=+uiLq+iUoKYbKnLFQDjwDIIzqGudQy8o1UFEOkmkrHU=; b=qNIj5AlgsubUhOyK1Kvrv7Y5EqAkC6zfb0s6QwxhDDZeTgmuVTkvxseqNZM1GrjcdW +eVqwGttXWQS1O8U9Mvosz90HZRRqaEusJyfQabpt3648PFjPiq9RJOhzT9mh+BQD0Cs 2rpElBfqhFzFy2iUyzhF3Qwn7DAeiK9AZUYuoHUqcsjMufuJIxKHpZIZuPvODV0dET07 4Q7pyYuUGcaA6/4JpYFn9JMA0IYDZnkvbQY3jZsM3CR36BeKrdiF0LwMIC6Wq7CKivCf PxMyyqnz4KeU6f5zx2hXb8DmzriPTe33/4x093yl6l07sdtPdlKykRrifsz6kxFS/R30 p4pQ== X-Gm-Message-State: APjAAAWALxpkjac8mM2fP7eMjQXXJXazlc+e9L869W8N2FyxpSxidGoN uPSfCXyw8eyzwRUDQ26HAT9y/czBO/wTFE9uFmY= X-Google-Smtp-Source: APXvYqzlFxyF121CIe85RASYbsEhGryD+IkK99FgQ3Ziz6Hy8dGPnJj8WzzVEyeNtwDP9s/uUld1eg/MwcrX7ost1fA= X-Received: by 2002:a63:64c4:: with SMTP id y187mr1758578pgb.150.1572540440772; Thu, 31 Oct 2019 09:47:20 -0700 (PDT) Date: Thu, 31 Oct 2019 09:46:34 -0700 In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com> 
Message-Id: <20191031164637.48901-15-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v3 14/17] arm64: efi: restore x18 if it was corrupted From: samitolvanen@google.com To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen If we detect a corrupted x18 and SCS is enabled, restore the register before jumping back to instrumented code. This is safe, because the wrapper is called with preemption disabled and a separate shadow stack is used for interrupt handling. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- arch/arm64/kernel/efi-rt-wrapper.S | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S index 3fc71106cb2b..945744f16086 100644 --- a/arch/arm64/kernel/efi-rt-wrapper.S +++ b/arch/arm64/kernel/efi-rt-wrapper.S @@ -34,5 +34,10 @@ ENTRY(__efi_rt_asm_wrapper) ldp x29, x30, [sp], #32 b.ne 0f ret -0: b efi_handle_corrupted_x18 // tail call +0: +#ifdef CONFIG_SHADOW_CALL_STACK + /* Restore x18 before returning to instrumented code. */ + mov x18, x2 +#endif + b efi_handle_corrupted_x18 // tail call ENDPROC(__efi_rt_asm_wrapper) From patchwork Thu Oct 31 16:46:35 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221659 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 46AED14DB for ; Thu, 31 Oct 2019 17:01:52 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 95618208C0 for ; Thu, 31 Oct 2019 17:01:51 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="P67vDmmf" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 95618208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17200-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 1779 invoked by uid 550); 31 Oct 2019 16:59:57 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 16035 invoked from network); 31 Oct 2019 16:47:35 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=hi8Cu8vAZVRE/aapAAyha6S0ojR+GJofj/wY0IAnOZ0=; b=P67vDmmf7l8iN6O8NA0rTRghuQ+TRNhS1arwHEnYxr1ViR1HY50ozBsDIgmKlmnXOV md0XEh+hEaxcVLZPP1QTlLYLvdf0FZ5h5YQ+hUhhwDYcR//ZTfxu2AYXNsiA1wS45nTi SVUQGq6SEiGtWg0iKnZxq+hsmFHJd94RUh0NNXXCkm0GrJaiN4pxdntxDEe2SE7qvN8R 
1E85Ba2ueIH7RNmmMjNsZslAzajrycwYYcrlzySe7vhPJyXlXhUCKyZb31QYebgcsA/j hbzgteQSWbLyHD9/3p76Xl00fiAX0l+Zr68AJyUeyf5tCJXRfOLQIH0Q1QaNskmKKaMW A+Aw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=hi8Cu8vAZVRE/aapAAyha6S0ojR+GJofj/wY0IAnOZ0=; b=W3s5FjcQoMUirTzd+CyEffgiDp5PBcQ7T2suD/Vol+LkcZjiN7/BbyHEcWiiNjobE9 QaY+1QG+wEvdDFgsut1VjApxG3rKwDYJnoQb47dbW1HfYpUrYJuYGvdaJLz8Zfj+i8C0 UwY3H9eBQk2vp8K8ujFDIxKpzw/P6QRYMf9gvZB/Poha5rxE1qdKPeaYBA3jEkf6sLLa 1U3T3B5sxG1VZj8YH1tAAogr4US0ZElHiFJxpUxySGmIb3cAx+Ei/aInd1w9m4KOIivF oEvSM1bntOi/2eNLUDVO76xDL+lidAqR2OKBvp08mMGJLD+OcD0g5gdkjxOiTROKs18G aZhw== X-Gm-Message-State: APjAAAUt5Gnn4w17ZgeBWkggEdmC51E0AeBHHAp6/Kd3ksFlD9eAjGE+ XYrgt/dm8tVK/sixELFOmnpdGAwr5MZXyWvV8TA= X-Google-Smtp-Source: APXvYqz6YRDXL8kUecfGxnqeQq30ocBHNq/zxXBU9i1Pelv7UcRC8hJJlGnCkmCBQjmZ8Y0bj93wNqvrOEFGlVoQDyk= X-Received: by 2002:a63:134a:: with SMTP id 10mr7622711pgt.441.1572540443216; Thu, 31 Oct 2019 09:47:23 -0700 (PDT) Date: Thu, 31 Oct 2019 09:46:35 -0700 In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com> Message-Id: <20191031164637.48901-16-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v3 15/17] arm64: vdso: disable Shadow Call Stack From: samitolvanen@google.com To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook --- arch/arm64/kernel/vdso/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile index dd2514bb1511..a87a4f11724e 100644 --- a/arch/arm64/kernel/vdso/Makefile +++ b/arch/arm64/kernel/vdso/Makefile @@ -25,7 +25,7 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING VDSO_LDFLAGS := -Bsymbolic -CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os +CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) KBUILD_CFLAGS += $(DISABLE_LTO) KASAN_SANITIZE := n UBSAN_SANITIZE := n From patchwork Thu Oct 31 16:46:36 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11221661 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1FBF414DB for ; Thu, 31 Oct 2019 17:02:05 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 6FE3F208C0 for ; Thu, 31 Oct 2019 17:02:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="JwFwWyNM" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6FE3F208C0 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass 
smtp.mailfrom=kernel-hardening-return-17201-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 9578 invoked by uid 550); 31 Oct 2019 17:01:00 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Delivered-To: moderator for kernel-hardening@lists.openwall.com Received: (qmail 16082 invoked from network); 31 Oct 2019 16:47:38 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=J83AV+8pmxy6MQ8Uszleewce6gVoMir4DYxUv0MBtSY=; b=JwFwWyNMv/RSq1lmIG7yeu7XzK5N0yGFebd9HD0oWd+L64cVHCjNrLKXvpd31A8m+o zacFSlAh7/S3GrsWYw8wOsz2riJzwAmUFphK9R3y/YJdaVSd/jkXF/Q3qSE7wm8QON6m 8mWRne5R6DOJ3KZmWMsv6rx2Y/vxcHDIQsZfg/5fmf+Zp5HlVd7VP1G5fKtX4FB8a3Bm 8RVCvD6+cudu/Eg1HxbSqa7xROHq/x3T25smtETOf5KVnSr4x5SBcV1jFg774W893DEo G58Q3hgq+5V0/nL/ilZ23sMkgZGieLq55EJ/1/61Mg1lGY8hcPLmSpXvq/AV5DgETbO1 ZBgQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=J83AV+8pmxy6MQ8Uszleewce6gVoMir4DYxUv0MBtSY=; b=t3sQUSqzbU9TMhYbXOHOEBF1E192qlv91aXufUXV8N46aJaXoMJLqm7EpWQs0gytze zQgHNQFsDBoN6b+XFHrqjn4TbLRogfp8M+R/OqikUsydKRPrB6Kui95bIqpW7Di2Jg5n rW1OQKTWoO7rOed60yklIEK9RRqySa7D9ZT792BVQgV4UQI54Jurp4DxqiGAvXxtE6/Y 5i8Ojdv47RSo53yKVTK+rZQbY+0sIHNqhthzh+9Zz1uRSuRutI2Bmj332gSq20h0Xsi+ GS2P5+mkkJ70v9RfjRsAN9n5rPVRqt9ChFXJK/CQj/pK/6XmdoFv3WUUQj1+e+bOTTmD HzaA== X-Gm-Message-State: APjAAAXOwUrgRINVn7VRQ26xKWDLgsUGcFE1i5f3z5vyMXRQHbS+G8Xr vn73UQi008Lw5fQmKYDg+2B7MyufzjFIvxhOSK0= X-Google-Smtp-Source: APXvYqz5tyIdUbH5XenlPdO5u18Bp097Z3l5i1ng81ta9dMEULM4omVmWKyeE1529Bf0daIn3cRdGv4Ho7NXy7c+5zY= X-Received: by 2002:a63:e056:: with SMTP id n22mr7564302pgj.73.1572540445818; Thu, 31 Oct 2019 09:47:25 -0700 (PDT) Date: Thu, 31 Oct 2019 09:46:36 -0700 In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com> Message-Id: <20191031164637.48901-17-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v3 16/17] arm64: disable SCS for hypervisor code From: samitolvanen@google.com To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Filter out CC_FLAGS_SCS for code that runs at a different exception level. 
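
The same pattern applies to any other directory whose objects do not run in the normal kernel context. As a sketch (the directory and object names here are hypothetical), the flags can be dropped for a whole directory, as in the hyp/Makefile change below, or stripped per object the way the vDSO patch earlier in the series does:

# Hypothetical Kbuild fragment: build everything here without SCS
# instrumentation, mirroring the hyp/ change.
KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))

# Alternatively, strip the flags from a single object only:
CFLAGS_REMOVE_example.o += $(CC_FLAGS_SCS)
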
Suggested-by: Steven Rostedt (VMware)
Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 arch/arm64/kvm/hyp/Makefile | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile
index ea710f674cb6..17ea3da325e9 100644
--- a/arch/arm64/kvm/hyp/Makefile
+++ b/arch/arm64/kvm/hyp/Makefile
@@ -28,3 +28,6 @@ GCOV_PROFILE := n
 KASAN_SANITIZE := n
 UBSAN_SANITIZE := n
 KCOV_INSTRUMENT := n
+
+# remove the SCS flags from all objects in this directory
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))

From patchwork Thu Oct 31 16:46:37 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sami Tolvanen
X-Patchwork-Id: 11221663
Date: Thu, 31 Oct
2019 09:46:37 -0700 In-Reply-To: <20191031164637.48901-1-samitolvanen@google.com> Message-Id: <20191031164637.48901-18-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191031164637.48901-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc0.303.g954a862665-goog Subject: [PATCH v3 17/17] arm64: implement Shadow Call Stack From: samitolvanen@google.com To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen This change implements shadow stack switching, initial SCS set-up, and interrupt shadow stacks for arm64. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- arch/arm64/Kconfig | 5 ++++ arch/arm64/include/asm/scs.h | 37 ++++++++++++++++++++++++++ arch/arm64/include/asm/stacktrace.h | 4 +++ arch/arm64/include/asm/thread_info.h | 3 +++ arch/arm64/kernel/Makefile | 1 + arch/arm64/kernel/asm-offsets.c | 3 +++ arch/arm64/kernel/entry.S | 28 ++++++++++++++++++++ arch/arm64/kernel/head.S | 9 +++++++ arch/arm64/kernel/irq.c | 2 ++ arch/arm64/kernel/process.c | 2 ++ arch/arm64/kernel/scs.c | 39 ++++++++++++++++++++++++++++ arch/arm64/kernel/smp.c | 4 +++ 12 files changed, 137 insertions(+) create mode 100644 arch/arm64/include/asm/scs.h create mode 100644 arch/arm64/kernel/scs.c diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 42867174920f..f4c94c5e8012 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -66,6 +66,7 @@ config ARM64 select ARCH_USE_QUEUED_RWLOCKS select ARCH_USE_QUEUED_SPINLOCKS select ARCH_SUPPORTS_MEMORY_FAILURE + select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_SUPPORTS_INT128 if GCC_VERSION >= 50000 || CC_IS_CLANG select ARCH_SUPPORTS_NUMA_BALANCING @@ -948,6 +949,10 @@ config ARCH_HAS_CACHE_LINE_SIZE config ARCH_ENABLE_SPLIT_PMD_PTLOCK def_bool y if PGTABLE_LEVELS > 2 +# Supported by clang >= 7.0 +config CC_HAVE_SHADOW_CALL_STACK + def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18) + config SECCOMP bool "Enable seccomp to safely compute untrusted bytecode" ---help--- diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h new file mode 100644 index 000000000000..c50d2b0c6c5f --- /dev/null +++ b/arch/arm64/include/asm/scs.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_SCS_H +#define _ASM_SCS_H + +#ifndef __ASSEMBLY__ + +#include + +#ifdef CONFIG_SHADOW_CALL_STACK + +extern void scs_init_irq(void); + +static __always_inline void scs_save(struct task_struct *tsk) +{ + void *s; + + asm volatile("mov %0, x18" : "=r" (s)); + task_set_scs(tsk, s); +} + +static inline void scs_overflow_check(struct task_struct *tsk) +{ + if (unlikely(scs_corrupted(tsk))) + panic("corrupted shadow stack detected inside scheduler\n"); +} + +#else /* CONFIG_SHADOW_CALL_STACK */ + +static inline void scs_init_irq(void) {} +static inline void scs_save(struct task_struct *tsk) {} +static inline void scs_overflow_check(struct task_struct *tsk) {} + +#endif /* CONFIG_SHADOW_CALL_STACK */ + +#endif /* __ASSEMBLY __ */ + +#endif /* _ASM_SCS_H */ diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h index 4d9b1f48dc39..b6cf32fb4efe 100644 --- 
a/arch/arm64/include/asm/stacktrace.h +++ b/arch/arm64/include/asm/stacktrace.h @@ -68,6 +68,10 @@ extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk); DECLARE_PER_CPU(unsigned long *, irq_stack_ptr); +#ifdef CONFIG_SHADOW_CALL_STACK +DECLARE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr); +#endif + static inline bool on_irq_stack(unsigned long sp, struct stack_info *info) { diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h index f0cec4160136..8c73764b9ed2 100644 --- a/arch/arm64/include/asm/thread_info.h +++ b/arch/arm64/include/asm/thread_info.h @@ -41,6 +41,9 @@ struct thread_info { #endif } preempt; }; +#ifdef CONFIG_SHADOW_CALL_STACK + void *shadow_call_stack; +#endif }; #define thread_saved_pc(tsk) \ diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 478491f07b4f..b3995329d9e5 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -63,6 +63,7 @@ obj-$(CONFIG_CRASH_CORE) += crash_core.o obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o obj-$(CONFIG_ARM64_SSBD) += ssbd.o obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o +obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o obj-y += vdso/ probes/ obj-$(CONFIG_COMPAT_VDSO) += vdso32/ diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 214685760e1c..f6762b9ae1e1 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -33,6 +33,9 @@ int main(void) DEFINE(TSK_TI_ADDR_LIMIT, offsetof(struct task_struct, thread_info.addr_limit)); #ifdef CONFIG_ARM64_SW_TTBR0_PAN DEFINE(TSK_TI_TTBR0, offsetof(struct task_struct, thread_info.ttbr0)); +#endif +#ifdef CONFIG_SHADOW_CALL_STACK + DEFINE(TSK_TI_SCS, offsetof(struct task_struct, thread_info.shadow_call_stack)); #endif DEFINE(TSK_STACK, offsetof(struct task_struct, stack)); #ifdef CONFIG_STACKPROTECTOR diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index cf3bd2976e57..12a5bc209280 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -172,6 +172,10 @@ alternative_cb_end apply_ssbd 1, x22, x23 +#ifdef CONFIG_SHADOW_CALL_STACK + ldr x18, [tsk, #TSK_TI_SCS] // Restore shadow call stack + str xzr, [tsk, #TSK_TI_SCS] +#endif .else add x21, sp, #S_FRAME_SIZE get_current_task tsk @@ -278,6 +282,12 @@ alternative_else_nop_endif ct_user_enter .endif +#ifdef CONFIG_SHADOW_CALL_STACK + .if \el == 0 + str x18, [tsk, #TSK_TI_SCS] // Save shadow call stack + .endif +#endif + #ifdef CONFIG_ARM64_SW_TTBR0_PAN /* * Restore access to TTBR0_EL1. If returning to EL0, no need for SPSR @@ -383,6 +393,9 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 .macro irq_stack_entry mov x19, sp // preserve the original sp +#ifdef CONFIG_SHADOW_CALL_STACK + mov x20, x18 // preserve the original shadow stack +#endif /* * Compare sp with the base of the task stack. 
@@ -400,6 +413,12 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 /* switch to the irq stack */ mov sp, x26 + +#ifdef CONFIG_SHADOW_CALL_STACK + /* also switch to the irq shadow stack */ + ldr_this_cpu x18, irq_shadow_call_stack_ptr, x26 +#endif + 9998: .endm @@ -409,6 +428,10 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 */ .macro irq_stack_exit mov sp, x19 +#ifdef CONFIG_SHADOW_CALL_STACK + /* x20 is also preserved */ + mov x18, x20 +#endif .endm /* GPRs used by entry code */ @@ -1155,6 +1178,11 @@ ENTRY(cpu_switch_to) ldr lr, [x8] mov sp, x9 msr sp_el0, x1 +#ifdef CONFIG_SHADOW_CALL_STACK + str x18, [x0, #TSK_TI_SCS] + ldr x18, [x1, #TSK_TI_SCS] + str xzr, [x1, #TSK_TI_SCS] +#endif ret ENDPROC(cpu_switch_to) NOKPROBE(cpu_switch_to) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 989b1944cb71..2be977c6496f 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -424,6 +425,10 @@ __primary_switched: stp xzr, x30, [sp, #-16]! mov x29, sp +#ifdef CONFIG_SHADOW_CALL_STACK + adr_l x18, init_shadow_call_stack // Set shadow call stack +#endif + str_l x21, __fdt_pointer, x5 // Save FDT pointer ldr_l x4, kimage_vaddr // Save the offset between @@ -731,6 +736,10 @@ __secondary_switched: ldr x2, [x0, #CPU_BOOT_TASK] cbz x2, __secondary_too_slow msr sp_el0, x2 +#ifdef CONFIG_SHADOW_CALL_STACK + ldr x18, [x2, #TSK_TI_SCS] // Set shadow call stack + str xzr, [x2, #TSK_TI_SCS] +#endif mov x29, #0 mov x30, #0 b secondary_start_kernel diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c index 04a327ccf84d..fe0ca522ff60 100644 --- a/arch/arm64/kernel/irq.c +++ b/arch/arm64/kernel/irq.c @@ -21,6 +21,7 @@ #include #include #include +#include unsigned long irq_err_count; @@ -63,6 +64,7 @@ static void init_irq_stacks(void) void __init init_IRQ(void) { init_irq_stacks(); + scs_init_irq(); irqchip_init(); if (!handle_arch_irq) panic("No interrupt controller found."); diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 71f788cd2b18..5f0aec285848 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -52,6 +52,7 @@ #include #include #include +#include #include #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK) @@ -507,6 +508,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, uao_thread_switch(next); ptrauth_thread_switch(next); ssbs_thread_switch(next); + scs_overflow_check(next); /* * Complete any pending TLB or cache maintenance on this CPU in case diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c new file mode 100644 index 000000000000..6f255072c9a9 --- /dev/null +++ b/arch/arm64/kernel/scs.c @@ -0,0 +1,39 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Shadow Call Stack support. 
+ * + * Copyright (C) 2019 Google LLC + */ + +#include +#include +#include + +DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr); + +#ifndef CONFIG_SHADOW_CALL_STACK_VMAP +DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack) + __aligned(SCS_SIZE); +#endif + +void scs_init_irq(void) +{ + int cpu; + + for_each_possible_cpu(cpu) { +#ifdef CONFIG_SHADOW_CALL_STACK_VMAP + unsigned long *p; + + p = __vmalloc_node_range(SCS_SIZE, SCS_SIZE, + VMALLOC_START, VMALLOC_END, + SCS_GFP, PAGE_KERNEL, + 0, cpu_to_node(cpu), + __builtin_return_address(0)); + + per_cpu(irq_shadow_call_stack_ptr, cpu) = p; +#else + per_cpu(irq_shadow_call_stack_ptr, cpu) = + per_cpu(irq_shadow_call_stack, cpu); +#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */ + } +} diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index dc9fe879c279..cc1938a585d2 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -44,6 +44,7 @@ #include #include #include +#include #include #include #include @@ -357,6 +358,9 @@ void cpu_die(void) { unsigned int cpu = smp_processor_id(); + /* Save the shadow stack pointer before exiting the idle task */ + scs_save(current); + idle_task_exit(); local_daif_mask();