From patchwork Tue Nov 5 23:55:55 2019
From: Sami Tolvanen
Date: Tue, 5 Nov 2019 15:55:55 -0800
Subject: [PATCH v5 01/14] arm64: mm: avoid x18 in idmap_kpti_install_ng_mappings
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu, Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland, Marc Zyngier, Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada, clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen
idmap_kpti_install_ng_mappings uses x18 as a temporary register, which
will result in a conflict when x18 is reserved. Use x16 and x17 instead
where needed.

Signed-off-by: Sami Tolvanen
Reviewed-by: Nick Desaulniers
Reviewed-by: Mark Rutland
---
 arch/arm64/mm/proc.S | 63 ++++++++++++++++++++++----------------------
 1 file changed, 32 insertions(+), 31 deletions(-)

diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index a1e0592d1fbc..fdabf40a83c8 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -250,15 +250,15 @@ ENTRY(idmap_kpti_install_ng_mappings)
 /* We're the boot CPU. Wait for the others to catch up */
 sevl
1: wfe
- ldaxr w18, [flag_ptr]
- eor w18, w18, num_cpus
- cbnz w18, 1b
+ ldaxr w17, [flag_ptr]
+ eor w17, w17, num_cpus
+ cbnz w17, 1b

 /* We need to walk swapper, so turn off the MMU. */
 pre_disable_mmu_workaround
- mrs x18, sctlr_el1
- bic x18, x18, #SCTLR_ELx_M
- msr sctlr_el1, x18
+ mrs x17, sctlr_el1
+ bic x17, x17, #SCTLR_ELx_M
+ msr sctlr_el1, x17
 isb

 /* Everybody is enjoying the idmap, so we can rewrite swapper. */
@@ -281,9 +281,9 @@ skip_pgd:
 isb

 /* We're done: fire up the MMU again */
- mrs x18, sctlr_el1
- orr x18, x18, #SCTLR_ELx_M
- msr sctlr_el1, x18
+ mrs x17, sctlr_el1
+ orr x17, x17, #SCTLR_ELx_M
+ msr sctlr_el1, x17
 isb

 /*
@@ -353,46 +353,47 @@ skip_pte:
 b.ne do_pte
 b next_pmd

+ .unreq cpu
+ .unreq num_cpus
+ .unreq swapper_pa
+ .unreq cur_pgdp
+ .unreq end_pgdp
+ .unreq pgd
+ .unreq cur_pudp
+ .unreq end_pudp
+ .unreq pud
+ .unreq cur_pmdp
+ .unreq end_pmdp
+ .unreq pmd
+ .unreq cur_ptep
+ .unreq end_ptep
+ .unreq pte
+
 /* Secondary CPUs end up here */
__idmap_kpti_secondary:
 /* Uninstall swapper before surgery begins */
- __idmap_cpu_set_reserved_ttbr1 x18, x17
+ __idmap_cpu_set_reserved_ttbr1 x16, x17

 /* Increment the flag to let the boot CPU we're ready */
-1: ldxr w18, [flag_ptr]
- add w18, w18, #1
- stxr w17, w18, [flag_ptr]
+1: ldxr w16, [flag_ptr]
+ add w16, w16, #1
+ stxr w17, w16, [flag_ptr]
 cbnz w17, 1b

 /* Wait for the boot CPU to finish messing around with swapper */
 sevl
1: wfe
- ldxr w18, [flag_ptr]
- cbnz w18, 1b
+ ldxr w16, [flag_ptr]
+ cbnz w16, 1b

 /* All done, act like nothing happened */
- offset_ttbr1 swapper_ttb, x18
+ offset_ttbr1 swapper_ttb, x16
 msr ttbr1_el1, swapper_ttb
 isb
 ret

- .unreq cpu
- .unreq num_cpus
- .unreq swapper_pa
 .unreq swapper_ttb
 .unreq flag_ptr
- .unreq cur_pgdp
- .unreq end_pgdp
- .unreq pgd
- .unreq cur_pudp
- .unreq end_pudp
- .unreq pud
- .unreq cur_pmdp
- .unreq end_pmdp
- .unreq pmd
- .unreq cur_ptep
- .unreq end_ptep
- .unreq pte
ENDPROC(idmap_kpti_install_ng_mappings)
 .popsection
#endif
From patchwork Tue Nov 5 23:55:56 2019
From: Sami Tolvanen
Date: Tue, 5 Nov 2019 15:55:56 -0800
Subject: [PATCH v5 02/14] arm64/lib: copy_page: avoid x18 register in assembler code

From: Ard Biesheuvel

Register x18 will no longer be used as a caller save register in the
future, so stop using it in the copy_page() code.
Link: https://patchwork.kernel.org/patch/9836869/
Signed-off-by: Ard Biesheuvel
[Sami: changed the offset and bias to be explicit]
Signed-off-by: Sami Tolvanen
Reviewed-by: Mark Rutland
---
 arch/arm64/lib/copy_page.S | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index bbb8562396af..290dd3c5266c 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -34,45 +34,45 @@ alternative_else_nop_endif
 ldp x14, x15, [x1, #96]
 ldp x16, x17, [x1, #112]
- mov x18, #(PAGE_SIZE - 128)
+ add x0, x0, #256
 add x1, x1, #128
1:
- subs x18, x18, #128
+ tst x0, #(PAGE_SIZE - 1)
alternative_if ARM64_HAS_NO_HW_PREFETCH
 prfm pldl1strm, [x1, #384]
alternative_else_nop_endif
- stnp x2, x3, [x0]
+ stnp x2, x3, [x0, #-256]
 ldp x2, x3, [x1]
- stnp x4, x5, [x0, #16]
+ stnp x4, x5, [x0, #16 - 256]
 ldp x4, x5, [x1, #16]
- stnp x6, x7, [x0, #32]
+ stnp x6, x7, [x0, #32 - 256]
 ldp x6, x7, [x1, #32]
- stnp x8, x9, [x0, #48]
+ stnp x8, x9, [x0, #48 - 256]
 ldp x8, x9, [x1, #48]
- stnp x10, x11, [x0, #64]
+ stnp x10, x11, [x0, #64 - 256]
 ldp x10, x11, [x1, #64]
- stnp x12, x13, [x0, #80]
+ stnp x12, x13, [x0, #80 - 256]
 ldp x12, x13, [x1, #80]
- stnp x14, x15, [x0, #96]
+ stnp x14, x15, [x0, #96 - 256]
 ldp x14, x15, [x1, #96]
- stnp x16, x17, [x0, #112]
+ stnp x16, x17, [x0, #112 - 256]
 ldp x16, x17, [x1, #112]

 add x0, x0, #128
 add x1, x1, #128

- b.gt 1b
+ b.ne 1b

- stnp x2, x3, [x0]
- stnp x4, x5, [x0, #16]
- stnp x6, x7, [x0, #32]
- stnp x8, x9, [x0, #48]
- stnp x10, x11, [x0, #64]
- stnp x12, x13, [x0, #80]
- stnp x14, x15, [x0, #96]
- stnp x16, x17, [x0, #112]
+ stnp x2, x3, [x0, #-256]
+ stnp x4, x5, [x0, #16 - 256]
+ stnp x6, x7, [x0, #32 - 256]
+ stnp x8, x9, [x0, #48 - 256]
+ stnp x10, x11, [x0, #64 - 256]
+ stnp x12, x13, [x0, #80 - 256]
+ stnp x14, x15, [x0, #96 - 256]
+ stnp x16, x17, [x0, #112 - 256]

 ret
ENDPROC(copy_page)
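[Illustration, not part of the patch: a minimal user-space C sketch of the loop-termination trick used above. The destination pointer is biased 256 bytes ahead of the position being stored, so the "done with this page?" test can be made on the pointer itself, freeing the counter register that used to be x18. Function and variable names are made up for this sketch; the kernel routine itself is hand-written assembly.]

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096UL

static void copy_page_sketch(unsigned char *dst, const unsigned char *src)
{
        dst += 256;     /* bias: run 256 bytes ahead of the position being stored */
        src += 128;     /* the first 128-byte chunk is conceptually already "in registers" */

        for (;;) {
                /* mirrors "tst x0, #(PAGE_SIZE - 1)" at the top of the loop */
                int last = ((uintptr_t)dst & (PAGE_SIZE - 1)) == 0;

                memcpy(dst - 256, src - 128, 128);      /* store the previously loaded chunk */
                dst += 128;
                src += 128;
                if (last)
                        break;                          /* "b.ne 1b" not taken */
        }
        memcpy(dst - 256, src - 128, 128);              /* final chunk, stored after the loop */
}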
From patchwork Tue Nov 5 23:55:57 2019
From: Sami Tolvanen
Date: Tue, 5 Nov 2019 15:55:57 -0800
Subject: [PATCH v5 03/14] arm64: kvm: stop treating register x18 as caller save

From: Ard Biesheuvel

In preparation for reserving x18, stop treating it as caller save in the
KVM guest entry/exit code. Currently, the code assumes there is no need
to preserve it for the host, given that it would have been assumed
clobbered anyway by the function call to __guest_enter(). Instead,
preserve its value and restore it upon return.

Link: https://patchwork.kernel.org/patch/9836891/
Signed-off-by: Ard Biesheuvel
[Sami: updated commit message, switched from x18 to x29 for the guest context]
Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Reviewed-by: Marc Zyngier
Reviewed-by: Mark Rutland
---
 arch/arm64/kvm/hyp/entry.S | 45 ++++++++++++++++++++------------------
 1 file changed, 24 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index e5cc8d66bf53..0c6832ec52b1 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -22,7 +22,12 @@
 .text
 .pushsection .hyp.text, "ax"

+/*
+ * We treat x18 as callee-saved as the host may use it as a platform
+ * register (e.g. for shadow call stack).
+ */
 .macro save_callee_saved_regs ctxt
+ str x18, [\ctxt, #CPU_XREG_OFFSET(18)]
 stp x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
 stp x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
 stp x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
@@ -32,6 +37,8 @@
 .endm

 .macro restore_callee_saved_regs ctxt
+ // We require \ctxt is not x18-x28
+ ldr x18, [\ctxt, #CPU_XREG_OFFSET(18)]
 ldp x19, x20, [\ctxt, #CPU_XREG_OFFSET(19)]
 ldp x21, x22, [\ctxt, #CPU_XREG_OFFSET(21)]
 ldp x23, x24, [\ctxt, #CPU_XREG_OFFSET(23)]
@@ -48,7 +55,7 @@ ENTRY(__guest_enter)
 // x0: vcpu
 // x1: host context
 // x2-x17: clobbered by macros
- // x18: guest context
+ // x29: guest context

 // Store the host regs
 save_callee_saved_regs x1
@@ -67,31 +74,28 @@ alternative_else_nop_endif
 ret

1:
- add x18, x0, #VCPU_CONTEXT
+ add x29, x0, #VCPU_CONTEXT

 // Macro ptrauth_switch_to_guest format:
 // ptrauth_switch_to_guest(guest cxt, tmp1, tmp2, tmp3)
 // The below macro to restore guest keys is not implemented in C code
 // as it may cause Pointer Authentication key signing mismatch errors
 // when this feature is enabled for kernel code.
- ptrauth_switch_to_guest x18, x0, x1, x2
+ ptrauth_switch_to_guest x29, x0, x1, x2

 // Restore guest regs x0-x17
- ldp x0, x1, [x18, #CPU_XREG_OFFSET(0)]
- ldp x2, x3, [x18, #CPU_XREG_OFFSET(2)]
- ldp x4, x5, [x18, #CPU_XREG_OFFSET(4)]
- ldp x6, x7, [x18, #CPU_XREG_OFFSET(6)]
- ldp x8, x9, [x18, #CPU_XREG_OFFSET(8)]
- ldp x10, x11, [x18, #CPU_XREG_OFFSET(10)]
- ldp x12, x13, [x18, #CPU_XREG_OFFSET(12)]
- ldp x14, x15, [x18, #CPU_XREG_OFFSET(14)]
- ldp x16, x17, [x18, #CPU_XREG_OFFSET(16)]
-
- // Restore guest regs x19-x29, lr
- restore_callee_saved_regs x18
-
- // Restore guest reg x18
- ldr x18, [x18, #CPU_XREG_OFFSET(18)]
+ ldp x0, x1, [x29, #CPU_XREG_OFFSET(0)]
+ ldp x2, x3, [x29, #CPU_XREG_OFFSET(2)]
+ ldp x4, x5, [x29, #CPU_XREG_OFFSET(4)]
+ ldp x6, x7, [x29, #CPU_XREG_OFFSET(6)]
+ ldp x8, x9, [x29, #CPU_XREG_OFFSET(8)]
+ ldp x10, x11, [x29, #CPU_XREG_OFFSET(10)]
+ ldp x12, x13, [x29, #CPU_XREG_OFFSET(12)]
+ ldp x14, x15, [x29, #CPU_XREG_OFFSET(14)]
+ ldp x16, x17, [x29, #CPU_XREG_OFFSET(16)]
+
+ // Restore guest regs x18-x29, lr
+ restore_callee_saved_regs x29

 // Do not touch any register after this!
 eret

@@ -114,7 +118,7 @@ ENTRY(__guest_exit)
 // Retrieve the guest regs x0-x1 from the stack
 ldp x2, x3, [sp], #16 // x0, x1

- // Store the guest regs x0-x1 and x4-x18
+ // Store the guest regs x0-x1 and x4-x17
 stp x2, x3, [x1, #CPU_XREG_OFFSET(0)]
 stp x4, x5, [x1, #CPU_XREG_OFFSET(4)]
 stp x6, x7, [x1, #CPU_XREG_OFFSET(6)]
@@ -123,9 +127,8 @@ ENTRY(__guest_exit)
 stp x12, x13, [x1, #CPU_XREG_OFFSET(12)]
 stp x14, x15, [x1, #CPU_XREG_OFFSET(14)]
 stp x16, x17, [x1, #CPU_XREG_OFFSET(16)]
- str x18, [x1, #CPU_XREG_OFFSET(18)]

- // Store the guest regs x19-x29, lr
+ // Store the guest regs x18-x29, lr
 save_callee_saved_regs x1

 get_host_ctxt x2, x3
From patchwork Tue Nov 5 23:55:58 2019
From: Sami Tolvanen
Date: Tue, 5 Nov 2019 15:55:58 -0800
Subject: [PATCH v5 04/14] arm64: kernel: avoid x18 in __cpu_soft_restart

From: Ard Biesheuvel

The code in __cpu_soft_restart() uses x18 as an arbitrary temp register,
which will shortly be disallowed. So use x8 instead.

Link: https://patchwork.kernel.org/patch/9836877/
Signed-off-by: Ard Biesheuvel
[Sami: updated commit message]
Signed-off-by: Sami Tolvanen
Reviewed-by: Mark Rutland
Reviewed-by: Kees Cook
---
 arch/arm64/kernel/cpu-reset.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
index 6ea337d464c4..32c7bf858dd9 100644
--- a/arch/arm64/kernel/cpu-reset.S
+++ b/arch/arm64/kernel/cpu-reset.S
@@ -42,11 +42,11 @@ ENTRY(__cpu_soft_restart)
 mov x0, #HVC_SOFT_RESTART
 hvc #0 // no return

-1: mov x18, x1 // entry
+1: mov x8, x1 // entry
 mov x0, x2 // arg0
 mov x1, x3 // arg1
 mov x2, x4 // arg2
- br x18
+ br x8
ENDPROC(__cpu_soft_restart)

 .popsection
From patchwork Tue Nov 5 23:55:59 2019
From: Sami Tolvanen
Date: Tue, 5 Nov 2019 15:55:59 -0800
Subject: [PATCH v5 05/14] add support for Clang's Shadow Call Stack (SCS)

This change adds generic support for Clang's Shadow Call Stack, which
uses a shadow stack to protect return addresses from being overwritten
by an attacker. Details are available here:

  https://clang.llvm.org/docs/ShadowCallStack.html

Note that security guarantees in the kernel differ from the ones
documented for user space. The kernel must store addresses of shadow
stacks used by other tasks and interrupt handlers in memory, which means
an attacker capable of reading and writing arbitrary memory may be able
to locate them and hijack control flow by modifying shadow stacks that
are not currently in use.
Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Reviewed-by: Miguel Ojeda
---
 Makefile | 6 ++
 arch/Kconfig | 33 ++++++
 include/linux/compiler-clang.h | 6 ++
 include/linux/compiler_types.h | 4 +
 include/linux/scs.h | 57 ++++++++++
 init/init_task.c | 8 ++
 kernel/Makefile | 1 +
 kernel/fork.c | 9 ++
 kernel/sched/core.c | 2 +
 kernel/scs.c | 187 +++++++++++++++++++++++++++++++++
 10 files changed, 313 insertions(+)
 create mode 100644 include/linux/scs.h
 create mode 100644 kernel/scs.c

diff --git a/Makefile b/Makefile
index b37d0e8fc61d..7f3a4c5c7dcc 100644
--- a/Makefile
+++ b/Makefile
@@ -846,6 +846,12 @@ ifdef CONFIG_LIVEPATCH
 KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone)
 endif

+ifdef CONFIG_SHADOW_CALL_STACK
+CC_FLAGS_SCS := -fsanitize=shadow-call-stack
+KBUILD_CFLAGS += $(CC_FLAGS_SCS)
+export CC_FLAGS_SCS
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)

diff --git a/arch/Kconfig b/arch/Kconfig
index 5f8a5d84dbbe..5e34cbcd8d6a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -521,6 +521,39 @@ config STACKPROTECTOR_STRONG
 about 20% of all kernel functions, which increases the kernel code
 size by about 2%.

+config ARCH_SUPPORTS_SHADOW_CALL_STACK
+ bool
+ help
+  An architecture should select this if it supports Clang's Shadow
+  Call Stack, has asm/scs.h, and implements runtime support for shadow
+  stack switching.
+
+config SHADOW_CALL_STACK_VMAP
+ bool
+ depends on SHADOW_CALL_STACK
+ help
+  Use virtually mapped shadow call stacks. Selecting this option
+  provides better stack exhaustion protection, but increases per-thread
+  memory consumption as a full page is allocated for each shadow stack.
+
+config SHADOW_CALL_STACK
+ bool "Clang Shadow Call Stack"
+ depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
+ help
+  This option enables Clang's Shadow Call Stack, which uses a
+  shadow stack to protect function return addresses from being
+  overwritten by an attacker. More information can be found from
+  Clang's documentation:
+
+    https://clang.llvm.org/docs/ShadowCallStack.html
+
+  Note that security guarantees in the kernel differ from the ones
+  documented for user space. The kernel must store addresses of shadow
+  stacks used by other tasks and interrupt handlers in memory, which
+  means an attacker capable of reading and writing arbitrary memory may
+  be able to locate them and hijack control flow by modifying shadow
+  stacks that are not currently in use.
+
 config HAVE_ARCH_WITHIN_STACK_FRAMES
 bool
 help

diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index 333a6695a918..18fc4d29ef27 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -42,3 +42,9 @@
 * compilers, like ICC.
 */
 #define barrier() __asm__ __volatile__("" : : : "memory")
+
+#if __has_feature(shadow_call_stack)
+# define __noscs __attribute__((__no_sanitize__("shadow-call-stack")))
+#else
+# define __noscs
+#endif

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 72393a8c1a6c..be5d5be4b1ae 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -202,6 +202,10 @@ struct ftrace_likely_data {
 # define randomized_struct_fields_end
 #endif

+#ifndef __noscs
+# define __noscs
+#endif
+
 #ifndef asm_volatile_goto
 #define asm_volatile_goto(x...) asm goto(x)
 #endif

diff --git a/include/linux/scs.h b/include/linux/scs.h
new file mode 100644
index 000000000000..c5572fd770b0
--- /dev/null
+++ b/include/linux/scs.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#ifndef _LINUX_SCS_H
+#define _LINUX_SCS_H
+
+#include
+#include
+#include
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+
+/*
+ * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit
+ * architecture) provided ~40% safety margin on stack usage while keeping
+ * memory allocation overhead reasonable.
+ */
+#define SCS_SIZE 1024UL
+#define GFP_SCS (GFP_KERNEL | __GFP_ZERO)
+
+/*
+ * A random number outside the kernel's virtual address space to mark the
+ * end of the shadow stack.
+ */
+#define SCS_END_MAGIC 0xaf0194819b1635f6UL
+
+#define task_scs(tsk) (task_thread_info(tsk)->shadow_call_stack)
+
+static inline void task_set_scs(struct task_struct *tsk, void *s)
+{
+ task_scs(tsk) = s;
+}
+
+extern void scs_init(void);
+extern void scs_task_reset(struct task_struct *tsk);
+extern int scs_prepare(struct task_struct *tsk, int node);
+extern bool scs_corrupted(struct task_struct *tsk);
+extern void scs_release(struct task_struct *tsk);
+
+#else /* CONFIG_SHADOW_CALL_STACK */
+
+#define task_scs(tsk) NULL
+
+static inline void task_set_scs(struct task_struct *tsk, void *s) {}
+static inline void scs_init(void) {}
+static inline void scs_task_reset(struct task_struct *tsk) {}
+static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; }
+static inline bool scs_corrupted(struct task_struct *tsk) { return false; }
+static inline void scs_release(struct task_struct *tsk) {}
+
+#endif /* CONFIG_SHADOW_CALL_STACK */
+
+#endif /* _LINUX_SCS_H */

diff --git a/init/init_task.c b/init/init_task.c
index 9e5cbe5eab7b..cbd40460e903 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -184,6 +185,13 @@ struct task_struct init_task
 };
 EXPORT_SYMBOL(init_task);

+#ifdef CONFIG_SHADOW_CALL_STACK
+unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] __init_task_data
+ __aligned(SCS_SIZE) = {
+ [(SCS_SIZE / sizeof(long)) - 1] = SCS_END_MAGIC
+};
+#endif
+
 /*
 * Initial thread structure. Alignment of this is handled by a special
 * linker map entry.
diff --git a/kernel/Makefile b/kernel/Makefile
index daad787fb795..313dbd44d576 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -102,6 +102,7 @@ obj-$(CONFIG_TRACEPOINTS) += trace/
 obj-$(CONFIG_IRQ_WORK) += irq_work.o
 obj-$(CONFIG_CPU_PM) += cpu_pm.o
 obj-$(CONFIG_BPF) += bpf/
+obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o

 obj-$(CONFIG_PERF_EVENTS) += events/

diff --git a/kernel/fork.c b/kernel/fork.c
index 55af6931c6ec..6c4266019935 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -94,6 +94,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -451,6 +452,8 @@ void put_task_stack(struct task_struct *tsk)

 void free_task(struct task_struct *tsk)
 {
+ scs_release(tsk);
+
 #ifndef CONFIG_THREAD_INFO_IN_TASK
 /*
  * The task is finally done with both the stack and thread_info,
@@ -834,6 +837,8 @@ void __init fork_init(void)
 NULL, free_vm_stack_cache);
 #endif

+ scs_init();
+
 lockdep_init_task(&init_task);
 uprobes_init();
 }
@@ -893,6 +898,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 if (err)
  goto free_stack;

+ err = scs_prepare(tsk, node);
+ if (err)
+  goto free_stack;
+
 #ifdef CONFIG_SECCOMP
 /*
  * We must handle setting up seccomp filters once we're under

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index dd05a378631a..6769e27052bf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11,6 +11,7 @@
 #include
+#include
 #include
 #include
@@ -6018,6 +6019,7 @@ void init_idle(struct task_struct *idle, int cpu)
 idle->se.exec_start = sched_clock();

 idle->flags |= PF_IDLE;
+ scs_task_reset(idle);
 kasan_unpoison_task_stack(idle);

 #ifdef CONFIG_SMP

diff --git a/kernel/scs.c b/kernel/scs.c
new file mode 100644
index 000000000000..e3234a4b92ec
--- /dev/null
+++ b/kernel/scs.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static inline void *__scs_base(struct task_struct *tsk)
+{
+ /*
+  * To minimize the risk of exposure, architectures may clear a
+  * task's thread_info::shadow_call_stack while that task is
+  * running, and only save/restore the active shadow call stack
+  * pointer when the usual register may be clobbered (e.g. across
+  * context switches).
+  *
+  * The shadow call stack is aligned to SCS_SIZE, and grows
+  * upwards, so we can mask out the low bits to extract the base
+  * when the task is not running.
+  */
+ return (void *)((unsigned long)task_scs(tsk) & ~(SCS_SIZE - 1));
+}
+
+static inline unsigned long *scs_magic(void *s)
+{
+ return (unsigned long *)(s + SCS_SIZE) - 1;
+}
+
+static inline void scs_set_magic(void *s)
+{
+ *scs_magic(s) = SCS_END_MAGIC;
+}
+
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+
+/* Matches NR_CACHED_STACKS for VMAP_STACK */
+#define NR_CACHED_SCS 2
+static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]);
+
+static void *scs_alloc(int node)
+{
+ int i;
+ void *s;
+
+ for (i = 0; i < NR_CACHED_SCS; i++) {
+  s = this_cpu_xchg(scs_cache[i], NULL);
+  if (s) {
+   memset(s, 0, SCS_SIZE);
+   goto out;
+  }
+ }
+
+ /*
+  * We allocate a full page for the shadow stack, which should be
+  * more than we need. Check the assumption nevertheless.
+  */
+ BUILD_BUG_ON(SCS_SIZE > PAGE_SIZE);
+
+ s = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+     VMALLOC_START, VMALLOC_END,
+     GFP_SCS, PAGE_KERNEL, 0,
+     node, __builtin_return_address(0));
+
+out:
+ if (s)
+  scs_set_magic(s);
+ /* TODO: poison for KASAN, unpoison in scs_free */
+
+ return s;
+}
+
+static void scs_free(void *s)
+{
+ int i;
+
+ for (i = 0; i < NR_CACHED_SCS; i++)
+  if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
+   return;
+
+ vfree_atomic(s);
+}
+
+static int scs_cleanup(unsigned int cpu)
+{
+ int i;
+ void **cache = per_cpu_ptr(scs_cache, cpu);
+
+ for (i = 0; i < NR_CACHED_SCS; i++) {
+  vfree(cache[i]);
+  cache[i] = NULL;
+ }
+
+ return 0;
+}
+
+void __init scs_init(void)
+{
+ WARN_ON(cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
+    scs_cleanup));
+}
+
+#else /* !CONFIG_SHADOW_CALL_STACK_VMAP */
+
+static struct kmem_cache *scs_cache;
+
+static inline void *scs_alloc(int node)
+{
+ void *s;
+
+ s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);
+ if (s) {
+  scs_set_magic(s);
+  /*
+   * Poison the allocation to catch unintentional accesses to
+   * the shadow stack when KASAN is enabled.
+   */
+  kasan_poison_object_data(scs_cache, s);
+ }
+
+ return s;
+}
+
+static inline void scs_free(void *s)
+{
+ kasan_unpoison_object_data(scs_cache, s);
+ kmem_cache_free(scs_cache, s);
+}
+
+void __init scs_init(void)
+{
+ scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE,
+          0, NULL);
+ WARN_ON(!scs_cache);
+}
+
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+
+void scs_task_reset(struct task_struct *tsk)
+{
+ /*
+  * Reset the shadow stack to the base address in case the task
+  * is reused.
+  */
+ task_set_scs(tsk, __scs_base(tsk));
+}
+
+int scs_prepare(struct task_struct *tsk, int node)
+{
+ void *s;
+
+ s = scs_alloc(node);
+ if (!s)
+  return -ENOMEM;
+
+ task_set_scs(tsk, s);
+ return 0;
+}
+
+bool scs_corrupted(struct task_struct *tsk)
+{
+ unsigned long *magic = scs_magic(__scs_base(tsk));
+
+ return READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC;
+}
+
+void scs_release(struct task_struct *tsk)
+{
+ void *s;
+
+ s = __scs_base(tsk);
+ if (!s)
+  return;
+
+ WARN_ON(scs_corrupted(tsk));
+
+ task_set_scs(tsk, NULL);
+ scs_free(s);
+}
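[Illustration, not part of the patch: a user-space sketch of the pointer arithmetic used by __scs_base() and scs_magic() above. Because the shadow stack is SCS_SIZE-aligned and grows upwards, the base can be recovered from any in-use pointer by masking off the low bits, and the end-of-stack magic sits in the last slot. aligned_alloc() merely stands in for the kernel allocators.]

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define SCS_SIZE        1024UL
#define SCS_END_MAGIC   0xaf0194819b1635f6UL

int main(void)
{
        unsigned long *scs = aligned_alloc(SCS_SIZE, SCS_SIZE);        /* stands in for scs_alloc() */
        if (!scs)
                return 1;

        scs[SCS_SIZE / sizeof(long) - 1] = SCS_END_MAGIC;              /* scs_set_magic() */

        unsigned long *ssp = scs + 17;  /* pretend some return addresses were pushed */

        void *base = (void *)((uintptr_t)ssp & ~(SCS_SIZE - 1));               /* __scs_base() */
        unsigned long *magic = (unsigned long *)((char *)base + SCS_SIZE) - 1; /* scs_magic() */

        printf("base recovered: %d, magic intact: %d\n",
               base == (void *)scs, *magic == SCS_END_MAGIC);
        free(scs);
        return 0;
}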
From patchwork Tue Nov 5 23:56:00 2019
From: Sami Tolvanen
Date: Tue, 5 Nov 2019 15:56:00 -0800
Subject: [PATCH v5 06/14] scs: add accounting

This change adds accounting for the memory allocated for shadow stacks.
Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 drivers/base/node.c | 6 ++++++
 fs/proc/meminfo.c | 4 ++++
 include/linux/mmzone.h | 3 +++
 kernel/scs.c | 20 ++++++++++++++++++++
 mm/page_alloc.c | 6 ++++++
 mm/vmstat.c | 3 +++
 6 files changed, 42 insertions(+)

diff --git a/drivers/base/node.c b/drivers/base/node.c
index 296546ffed6c..111e58ec231e 100644
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -415,6 +415,9 @@ static ssize_t node_read_meminfo(struct device *dev,
 "Node %d AnonPages: %8lu kB\n"
 "Node %d Shmem: %8lu kB\n"
 "Node %d KernelStack: %8lu kB\n"
+#ifdef CONFIG_SHADOW_CALL_STACK
+ "Node %d ShadowCallStack:%8lu kB\n"
+#endif
 "Node %d PageTables: %8lu kB\n"
 "Node %d NFS_Unstable: %8lu kB\n"
 "Node %d Bounce: %8lu kB\n"
@@ -438,6 +441,9 @@ static ssize_t node_read_meminfo(struct device *dev,
 nid, K(node_page_state(pgdat, NR_ANON_MAPPED)),
 nid, K(i.sharedram),
 nid, sum_zone_node_page_state(nid, NR_KERNEL_STACK_KB),
+#ifdef CONFIG_SHADOW_CALL_STACK
+ nid, sum_zone_node_page_state(nid, NR_KERNEL_SCS_BYTES) / 1024,
+#endif
 nid, K(sum_zone_node_page_state(nid, NR_PAGETABLE)),
 nid, K(node_page_state(pgdat, NR_UNSTABLE_NFS)),
 nid, K(sum_zone_node_page_state(nid, NR_BOUNCE)),

diff --git a/fs/proc/meminfo.c b/fs/proc/meminfo.c
index 8c1f1bb1a5ce..49768005a79e 100644
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -103,6 +103,10 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 show_val_kb(m, "SUnreclaim: ", sunreclaim);
 seq_printf(m, "KernelStack: %8lu kB\n",
  global_zone_page_state(NR_KERNEL_STACK_KB));
+#ifdef CONFIG_SHADOW_CALL_STACK
+ seq_printf(m, "ShadowCallStack:%8lu kB\n",
+  global_zone_page_state(NR_KERNEL_SCS_BYTES) / 1024);
+#endif
 show_val_kb(m, "PageTables: ",
  global_zone_page_state(NR_PAGETABLE));

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index bda20282746b..fcb8c1708f9e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -200,6 +200,9 @@ enum zone_stat_item {
 NR_MLOCK, /* mlock()ed pages found and moved off LRU */
 NR_PAGETABLE, /* used for pagetables */
 NR_KERNEL_STACK_KB, /* measured in KiB */
+#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK)
+ NR_KERNEL_SCS_BYTES, /* measured in bytes */
+#endif
 /* Second 128 byte cacheline */
 NR_BOUNCE,
 #if IS_ENABLED(CONFIG_ZSMALLOC)

diff --git a/kernel/scs.c b/kernel/scs.c
index e3234a4b92ec..4f5774b6f27d 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include

 static inline void *__scs_base(struct task_struct *tsk)
@@ -89,6 +90,11 @@ static void scs_free(void *s)
 vfree_atomic(s);
 }

+static struct page *__scs_page(struct task_struct *tsk)
+{
+ return vmalloc_to_page(__scs_base(tsk));
+}
+
 static int scs_cleanup(unsigned int cpu)
 {
 int i;
@@ -135,6 +141,11 @@ static inline void scs_free(void *s)
 kmem_cache_free(scs_cache, s);
 }

+static struct page *__scs_page(struct task_struct *tsk)
+{
+ return virt_to_page(__scs_base(tsk));
+}
+
 void __init scs_init(void)
 {
 scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE,
@@ -153,6 +164,12 @@ void scs_task_reset(struct task_struct *tsk)
 task_set_scs(tsk, __scs_base(tsk));
 }

+static void scs_account(struct task_struct *tsk, int account)
+{
+ mod_zone_page_state(page_zone(__scs_page(tsk)), NR_KERNEL_SCS_BYTES,
+  account * SCS_SIZE);
+}
+
 int scs_prepare(struct task_struct *tsk, int node)
 {
 void *s;
@@ -162,6 +179,8 @@ int scs_prepare(struct task_struct *tsk, int node)
  return -ENOMEM;

 task_set_scs(tsk, s);
+ scs_account(tsk, 1);
+
 return 0;
 }
@@ -200,6 +201,7 @@ void scs_release(struct task_struct *tsk)
 WARN_ON(scs_corrupted(tsk));

+ scs_account(tsk, -1);
 task_set_scs(tsk, NULL);
 scs_free(s);
 }

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index ecc3dbad606b..fe17d69d98a7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5361,6 +5361,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 " managed:%lukB"
 " mlocked:%lukB"
 " kernel_stack:%lukB"
+#ifdef CONFIG_SHADOW_CALL_STACK
+ " shadow_call_stack:%lukB"
+#endif
 " pagetables:%lukB"
 " bounce:%lukB"
 " free_pcp:%lukB"
@@ -5382,6 +5385,9 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 K(zone_managed_pages(zone)),
 K(zone_page_state(zone, NR_MLOCK)),
 zone_page_state(zone, NR_KERNEL_STACK_KB),
+#ifdef CONFIG_SHADOW_CALL_STACK
+ zone_page_state(zone, NR_KERNEL_SCS_BYTES) / 1024,
+#endif
 K(zone_page_state(zone, NR_PAGETABLE)),
 K(zone_page_state(zone, NR_BOUNCE)),
 K(free_pcp),

diff --git a/mm/vmstat.c b/mm/vmstat.c
index 6afc892a148a..9fe4afe670fe 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1118,6 +1118,9 @@ const char * const vmstat_text[] = {
 "nr_mlock",
 "nr_page_table_pages",
 "nr_kernel_stack",
+#if IS_ENABLED(CONFIG_SHADOW_CALL_STACK)
+ "nr_shadow_call_stack_bytes",
+#endif
 "nr_bounce",
 #if IS_ENABLED(CONFIG_ZSMALLOC)
 "nr_zspages",
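[Illustration, not part of the patch: with the hunks above applied and CONFIG_SHADOW_CALL_STACK enabled, the new counter is exposed as a "ShadowCallStack:" line in /proc/meminfo, reported in kB (NR_KERNEL_SCS_BYTES / 1024). A trivial user-space reader, for completeness:]

#include <stdio.h>
#include <string.h>

int main(void)
{
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f)
                return 1;
        while (fgets(line, sizeof(line), f))
                if (!strncmp(line, "ShadowCallStack:", 16))
                        fputs(line, stdout);    /* e.g. "ShadowCallStack:     120 kB" */
        fclose(f);
        return 0;
}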
From patchwork Tue Nov 5 23:56:01 2019
From: Sami Tolvanen
Date: Tue, 5 Nov 2019 15:56:01 -0800
Subject: [PATCH v5 07/14] scs: add support for stack usage debugging

Implements CONFIG_DEBUG_STACK_USAGE for shadow stacks. When enabled,
also prints out the highest shadow stack usage per process.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
---
 kernel/scs.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/kernel/scs.c b/kernel/scs.c
index 4f5774b6f27d..a47fae33efdc 100644
--- a/kernel/scs.c
+++ b/kernel/scs.c
@@ -184,6 +184,44 @@ int scs_prepare(struct task_struct *tsk, int node)
 return 0;
 }

+#ifdef CONFIG_DEBUG_STACK_USAGE
+static inline unsigned long scs_used(struct task_struct *tsk)
+{
+ unsigned long *p = __scs_base(tsk);
+ unsigned long *end = scs_magic(p);
+ unsigned long s = (unsigned long)p;
+
+ while (p < end && READ_ONCE_NOCHECK(*p))
+  p++;
+
+ return (unsigned long)p - s;
+}
+
+static void scs_check_usage(struct task_struct *tsk)
+{
+ static DEFINE_SPINLOCK(lock);
+ static unsigned long highest;
+ unsigned long used = scs_used(tsk);
+
+ if (used <= highest)
+  return;
+
+ spin_lock(&lock);
+
+ if (used > highest) {
+  pr_info("%s (%d): highest shadow stack usage: %lu bytes\n",
+   tsk->comm, task_pid_nr(tsk), used);
+  highest = used;
+ }
+
+ spin_unlock(&lock);
+}
+#else
+static inline void scs_check_usage(struct task_struct *tsk)
+{
+}
+#endif
+
 bool scs_corrupted(struct task_struct *tsk)
 {
 unsigned long *magic = scs_magic(__scs_base(tsk));
@@ -200,6 +238,7 @@ void scs_release(struct task_struct *tsk)
  return;

 WARN_ON(scs_corrupted(tsk));
+ scs_check_usage(tsk);

 scs_account(tsk, -1);
 task_set_scs(tsk, NULL);
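[Illustration, not part of the patch: a user-space sketch of the scs_used() scan added above. The shadow stack is zero-initialized and grows upwards, so the high-water mark can be estimated by walking from the base until the first slot that is still zero. Names and values below are made up for the sketch.]

#include <stdio.h>
#include <string.h>

#define SCS_SIZE 1024UL

static unsigned long scs_used_sketch(const unsigned long *base)
{
        const unsigned long *p = base;
        const unsigned long *end = base + SCS_SIZE / sizeof(long) - 1; /* last slot holds the magic */

        while (p < end && *p)          /* stop at the first never-written slot */
                p++;

        return (unsigned long)((const char *)p - (const char *)base);
}

int main(void)
{
        unsigned long scs[SCS_SIZE / sizeof(long)];

        memset(scs, 0, sizeof(scs));
        scs[0] = 0x1111; scs[1] = 0x2222; scs[2] = 0x3333;     /* pretend three return addresses were pushed */
        printf("used: %lu bytes\n", scs_used_sketch(scs));     /* prints "used: 24 bytes" */
        return 0;
}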
From patchwork Tue Nov 5 23:56:02 2019
From: Sami Tolvanen
Date: Tue, 5 Nov 2019 15:56:02 -0800
Subject: [PATCH v5 08/14] arm64: disable function graph tracing with SCS

The graph tracer hooks returns by modifying frame records on the
(regular) stack, but with SCS the return address is taken from the
shadow stack, and the value in the frame record has no effect. As we
don't currently have a mechanism to determine the corresponding slot
on the shadow stack (and to pass this through the ftrace
infrastructure), for now let's disable the graph tracer when SCS is
enabled.
Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook Reviewed-by: Mark Rutland --- arch/arm64/Kconfig | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 3f047afb982c..8cda176dad9a 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -148,7 +148,7 @@ config ARM64 select HAVE_FTRACE_MCOUNT_RECORD select HAVE_FUNCTION_TRACER select HAVE_FUNCTION_ERROR_INJECTION - select HAVE_FUNCTION_GRAPH_TRACER + select HAVE_FUNCTION_GRAPH_TRACER if !SHADOW_CALL_STACK select HAVE_GCC_PLUGINS select HAVE_HW_BREAKPOINT if PERF_EVENTS select HAVE_IRQ_TIME_ACCOUNTING From patchwork Tue Nov 5 23:56:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11228983 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 795DB1599 for ; Tue, 5 Nov 2019 23:57:43 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id B66D7206B8 for ; Tue, 5 Nov 2019 23:57:42 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="WX/V/VTf" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B66D7206B8 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17309-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 26061 invoked by uid 550); 5 Nov 2019 23:56:53 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 25989 invoked from network); 5 Nov 2019 23:56:52 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=xr3X122JfUZTOJ9HNPiJpW+Iyor6r4hGY/b4sjNxJZA=; b=WX/V/VTfUhYNVK63Q6o2nwuNZuF84nG4taeFhEo17XXi/wiIq5TBNma6L2QWkkxymH DvQMMy0aOWkuLzcjaD/VZ7I9iJTaYKRhuhci0d1Uv1dvow1PsPAbfcG7lofjVT32BZTo XfopDA2vhLhs1jhnHF0BtxTMW++6XgIwcQqObAyguiG5eCzS2AdaeI/mW/++SFMdkYFO 0sFgOE5K8+grQi8hixiOmdeRZME3HY40NRXpHJTOGJP9Yp/g21dwfZOSqMKsicYNTg7M PQxjmxBUU54ovXgSaLB6R/0V/BtxcVf6/NuKLrw9AecxZk0Kx+tl9gJtoHFRyt1RFLTp RO5A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=xr3X122JfUZTOJ9HNPiJpW+Iyor6r4hGY/b4sjNxJZA=; b=kOQpuElaIe2g4207Ot/B0uNZBTLwwd4a8yZsCg7LZqlNxwsyIOetbB0qaONjIFUYJr gHxHZCXJoJs4M00DLXLSZFzcO4IcPb77ajRy4KodzBDXUOxSh4qf816o7WZ6uzwoWzUZ yrtrNW0NbTuegc/6hJTaLAQpqjAbY2ZfRblOgpg86BwFkRuL6atVCnAmkWOXTgR80JMt CGHml8HI+Lp2xVL9Hgo60blV+LEgFCriMoHx951PmyEQtD7nqHAd0SmW94bpOmUJvk68 hk4MB6peb24fIhpEPvOpAmqHQoujDewjkZhvCO3bBwkKuQ33ZRIszyNE4A+lZDP3CTdi nWDw== X-Gm-Message-State: APjAAAXfEK9LWje5MI7HM8fQoAA2OZFymOWoJTZmcjCPIkzXS9sh4Ty/ Nf2z7/2IArMftDbrUM0+bPlaZrtWW3gU8XC9mFU= X-Google-Smtp-Source: APXvYqzws/iChASYEHCRVYS8P2kdCocZTmxir9yLCAbfQ2Y76yiotnVRg74apgBfcVGe/Rb1kCjSoUo8I9YbTGAoXKw= X-Received: by 2002:a65:5683:: with SMTP id v3mr19280245pgs.190.1572998200132; Tue, 05 Nov 2019 15:56:40 -0800 (PST) Date: Tue, 5 Nov 2019 15:56:03 -0800 In-Reply-To: 
<20191105235608.107702-1-samitolvanen@google.com> Message-Id: <20191105235608.107702-10-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191105235608.107702-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc1.363.gb1bccd3e3d-goog Subject: [PATCH v5 09/14] arm64: reserve x18 from general allocation with SCS From: Sami Tolvanen To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Reserve the x18 register from general allocation when SCS is enabled, because the compiler uses the register to store the current task's shadow stack pointer. Note that all external kernel modules must also be compiled with -ffixed-x18 if the kernel has SCS enabled. Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook --- arch/arm64/Makefile | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile index 2c0238ce0551..ef76101201b2 100644 --- a/arch/arm64/Makefile +++ b/arch/arm64/Makefile @@ -72,6 +72,10 @@ stack_protector_prepare: prepare0 include/generated/asm-offsets.h)) endif +ifeq ($(CONFIG_SHADOW_CALL_STACK), y) +KBUILD_CFLAGS += -ffixed-x18 +endif + ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) KBUILD_CPPFLAGS += -mbig-endian CHECKFLAGS += -D__AARCH64EB__ From patchwork Tue Nov 5 23:56:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11228985 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 48EB91599 for ; Tue, 5 Nov 2019 23:57:53 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 7FAA8206B8 for ; Tue, 5 Nov 2019 23:57:52 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="jWGQrW7g" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7FAA8206B8 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17310-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 26290 invoked by uid 550); 5 Nov 2019 23:56:55 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 26205 invoked from network); 5 Nov 2019 23:56:54 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=uzQ8cGBP/iiy1dmy29qxsrkA2y8A3VyyoGWk/X1qxUc=; b=jWGQrW7gYc6D9nJteHiWYUXaoKd47FGvgCnUcq1RN0FNUvS12DXoQv/Qs2ZSsEmdsr KIrZA7BzUYt42AyztGMVcz/QumFEYK2cJY5/FcPWOE01nPlBEr1XYQMmzvlCuOtePFHy BrO2pUitDMPbbMwQ+djW4zvLYrFatsGxSZFpwthI0l0tEUjCd15qBB7CofGFXzjONnIJ qdmdd/w16M48KK0143WbpUnJCYAPG0RUwEwhRL4ts3yhMXpE4SQA62x4WivBRaT/AZDF 1NajHEvW01XhI1VvxrXdeiyPREGJ54AL4BUaLlFQCeTXTsDjryTGcoQgi5Pmte9ZuS+r 
+4hw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=uzQ8cGBP/iiy1dmy29qxsrkA2y8A3VyyoGWk/X1qxUc=; b=uCXWF8IWhkA9RXtsN3R1Yf5JjnnTITYeecFfZ3H5MoJlNU5EIULJsWCyn1Jsnmoo6j 2OYM0KwavtwCnOeuPJeds3/iJMB5Q8DEP9G46hbNq6lJeCHmoPdwk5APBIqmzXp6S85d dC+dmzTHWi9ztQbqwjiJHzvVZjxxHHAzjYZsQbLrqbkwkoyQmC8/+T6silkmtHkXfeVN 7OJhEoyQyFGQtYIjdeTry1G0rQhvOOw9Ujph6qCVXTBU+zlcP/GBtVlfWg8eV7267N2I 0wCnfxEsW6hbjGsiNyKlqvf51FtlSDXosbJEeaGEYoae+5XHq4FC/mlTDXG503+JM3Pc tMBA== X-Gm-Message-State: APjAAAXH4RCGSHreDDJ81ZKVtepa2IhFLKb1pCaEBXi79e5haqq0jrA8 AgUVEwjgTWlAzZmlwoCLti9er8xh0JuRMvoerzw= X-Google-Smtp-Source: APXvYqzhnl/od1+4wqmG+HB56zcHmEodlg2M4/tt0l0rrKjOHl/n60b0sUyw2UaMRpO3OuPN2c7IjdKlcyCGNq68Qz4= X-Received: by 2002:a65:5382:: with SMTP id x2mr1469482pgq.420.1572998202872; Tue, 05 Nov 2019 15:56:42 -0800 (PST) Date: Tue, 5 Nov 2019 15:56:04 -0800 In-Reply-To: <20191105235608.107702-1-samitolvanen@google.com> Message-Id: <20191105235608.107702-11-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191105235608.107702-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc1.363.gb1bccd3e3d-goog Subject: [PATCH v5 10/14] arm64: preserve x18 when CPU is suspended From: Sami Tolvanen To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Don't lose the current task's shadow stack when the CPU is suspended. Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook Reviewed-by: Mark Rutland --- arch/arm64/include/asm/suspend.h | 2 +- arch/arm64/mm/proc.S | 14 ++++++++++++++ 2 files changed, 15 insertions(+), 1 deletion(-) diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h index 8939c87c4dce..0cde2f473971 100644 --- a/arch/arm64/include/asm/suspend.h +++ b/arch/arm64/include/asm/suspend.h @@ -2,7 +2,7 @@ #ifndef __ASM_SUSPEND_H #define __ASM_SUSPEND_H -#define NR_CTX_REGS 12 +#define NR_CTX_REGS 13 #define NR_CALLEE_SAVED_REGS 12 /* diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S index fdabf40a83c8..5c8219c55948 100644 --- a/arch/arm64/mm/proc.S +++ b/arch/arm64/mm/proc.S @@ -49,6 +49,8 @@ * cpu_do_suspend - save CPU registers context * * x0: virtual address of context pointer + * + * This must be kept in sync with struct cpu_suspend_ctx in . */ ENTRY(cpu_do_suspend) mrs x2, tpidr_el0 @@ -73,6 +75,11 @@ alternative_endif stp x8, x9, [x0, #48] stp x10, x11, [x0, #64] stp x12, x13, [x0, #80] + /* + * Save x18 as it may be used as a platform register, e.g. by shadow + * call stack. + */ + str x18, [x0, #96] ret ENDPROC(cpu_do_suspend) @@ -89,6 +96,13 @@ ENTRY(cpu_do_resume) ldp x9, x10, [x0, #48] ldp x11, x12, [x0, #64] ldp x13, x14, [x0, #80] + /* + * Restore x18, as it may be used as a platform register, and clear + * the buffer to minimize the risk of exposure when used for shadow + * call stack. 
+ */ + ldr x18, [x0, #96] + str xzr, [x0, #96] msr tpidr_el0, x2 msr tpidrro_el0, x3 msr contextidr_el1, x4 From patchwork Tue Nov 5 23:56:05 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11228989 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0DE5515AB for ; Tue, 5 Nov 2019 23:58:04 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 4B9E6206B8 for ; Tue, 5 Nov 2019 23:58:03 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="JN9cRXW3" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4B9E6206B8 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17311-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 26529 invoked by uid 550); 5 Nov 2019 23:56:58 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 26465 invoked from network); 5 Nov 2019 23:56:57 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=uZ/kQt4EYM2LvpfCusnWoxftHyD2DxgcHTICv7X7dqw=; b=JN9cRXW3eTB7Z2OFW88PXu3tlOY8dStgQEbY8qRUEgweBu4PUIX8TxJ8b95mM5amMG dY/gdqPeBwxcAj5lhK+CRHRd9Y9Vpp0cdvG07lguYzEIxlPQjlLTWMX36AhhMbrOp+bu Cl6hQENGtphPESeiB1wGFAywqSFr434R5npfiMW2A0Y8vIjGPZh4zUIbEexaOWFxdCOT CLAJQw1H0pC3jFL2n55/3yC6h22GnBsHily047age54Pgx3FfV4IqFsu4AOA5+wZCKOA Vh966NbyvN58mnGWJfBhXw2nRYEngLWud/TUyZ3xl1yXKU0boesH5G03NhQo6gYktAXR HQGw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=uZ/kQt4EYM2LvpfCusnWoxftHyD2DxgcHTICv7X7dqw=; b=mHJDvfs2b2GtkUm9UIIUB4iYD/qltOMLIFeD9e97UMbqhlAASzGFUcOitjd2siIUTS xzOvVI4Q72HE8IkojUVnSnLtrxs58ezp9jBJBf649dLirvKSbpefmZkU1deE/vw0Gsce B/Pr8Ua7/rSlCZYjqh6d1ncMVlKHBpuZ/e6wbz4GXkhdo1DrFiONZorxVDZmy3b6Ouah 3D4athl/OB9hHzYCsxQZnBvxWPUUa1/Cn49MqoWVXh6E+GAAmfZKXal+ouljFqZnxc/V CdWPw73B74/Tfebt1dmSfsSTAKdtN4FOeMbJ/ZbPwUqxQCZjmpHOUxsj/Q4naip7MSTn Ks6w== X-Gm-Message-State: APjAAAWBamBDZft1dMN4cjUBT/3ezc31hl872w/iclRqXH8qsW4O3Eg1 uf93GasWAzzjDCibxJ/1dNENMOM1N0IhW/1LK+E= X-Google-Smtp-Source: APXvYqxTWXWUBEJ0UiXFjDLjnNptoAXq2B85FgeAZV4DWhGGMZ0/dSlB0BXzaBE92E4TUaHE+e4XmBjxNCDfcOvMvRU= X-Received: by 2002:a65:5a8c:: with SMTP id c12mr39559106pgt.140.1572998205590; Tue, 05 Nov 2019 15:56:45 -0800 (PST) Date: Tue, 5 Nov 2019 15:56:05 -0800 In-Reply-To: <20191105235608.107702-1-samitolvanen@google.com> Message-Id: <20191105235608.107702-12-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191105235608.107702-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc1.363.gb1bccd3e3d-goog Subject: [PATCH v5 11/14] arm64: efi: restore x18 if it was corrupted From: Sami Tolvanen To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura 
Abbott , Mark Rutland , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen If we detect a corrupted x18 and SCS is enabled, restore the register before jumping back to instrumented code. This is safe, because the wrapper is called with preemption disabled and a separate shadow stack is used for interrupt handling. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- arch/arm64/kernel/efi-rt-wrapper.S | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/arch/arm64/kernel/efi-rt-wrapper.S b/arch/arm64/kernel/efi-rt-wrapper.S index 3fc71106cb2b..945744f16086 100644 --- a/arch/arm64/kernel/efi-rt-wrapper.S +++ b/arch/arm64/kernel/efi-rt-wrapper.S @@ -34,5 +34,10 @@ ENTRY(__efi_rt_asm_wrapper) ldp x29, x30, [sp], #32 b.ne 0f ret -0: b efi_handle_corrupted_x18 // tail call +0: +#ifdef CONFIG_SHADOW_CALL_STACK + /* Restore x18 before returning to instrumented code. */ + mov x18, x2 +#endif + b efi_handle_corrupted_x18 // tail call ENDPROC(__efi_rt_asm_wrapper) From patchwork Tue Nov 5 23:56:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11228991 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 7957A1599 for ; Tue, 5 Nov 2019 23:58:14 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id B5A46206B8 for ; Tue, 5 Nov 2019 23:58:13 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="QfiPTkyl" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B5A46206B8 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17312-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 27829 invoked by uid 550); 5 Nov 2019 23:57:00 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 27741 invoked from network); 5 Nov 2019 23:57:00 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=oB4lIvjzdBiHgRAUphuiQVPgI+jp2K5bMoUCGUiH4hk=; b=QfiPTkylKak2JP/fA92kLS2kHYHGvI0ROKKMGBuahal4YLtzvGBJN27XjNyiQH8HG6 yGlyTyrU/unO2GWm0Nv4aMpTNLKbJN+tCvpwionJjZcpDaxtYqxpVnIYWW0enJwD9cGu H3LPc6bHWpLSwRsuEbG1JVtUql2ni5CS0AKG2Gtvj/sOGcnFyVzwIGABxUDq4hnShW0d YrnkM62CleNkLuir4Rfyub+T9Qd7Ss628T/1ROy/StYjkO1JBGOTm6WgyQ7LRFXQEEsm JL468RJSiozD/j2M3Vi+OnMGSwmwEQH4MedG0ZFkg+53Rwgl2MeC2pLijfEUHj7jkGJ/ 9N/w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=oB4lIvjzdBiHgRAUphuiQVPgI+jp2K5bMoUCGUiH4hk=; b=odInNgc3gP0BEo3K+LoCCrIXx9NBXXzLozEHzVLO61gvAESxvDcpRcuVym03+ouXAd 7XXE1tgHEfzPS8OSQG+gg6C9XwoG63HtmZ0CXzOkMKJ9b2W5eqfhYTFExEeTRo4jwJW+ 
ivvjnrop0xBg17H/7zzhRB9LCG6UekQam3snNdep0a6Ck81mttr4xDsFjuYsDIfRt0EN ntgROLuKtCnk2f4bAMxCOSgVgp0DTGZ4EcSKZCDs7eMNLXW2STKNT2N+8Hd8xdnMj6wt 8HwykhQIe/VRlgUp7Nj2ht1k+wroj6rtYbNgvmqe72mT9seph/O9r4I8az+Gzqo9Sk1Q t/mg== X-Gm-Message-State: APjAAAVtfB8yvuqIQXFD6xOpkimaFJk7GbnVPEa+0wwJNBRQFDW9wUwB zyoe6g5iaeg3kXoXOG9X5KnxpfZriGMBs2EionU= X-Google-Smtp-Source: APXvYqzD4X/kPtbk+w3gExO5Bezjz7X/3F/Tl84XtD5Pt1TYYSMyOibW/2GJGTtsQO1axnooT5H2E3H7HeWsFEsu76A= X-Received: by 2002:ac8:4543:: with SMTP id z3mr20376230qtn.41.1572998208364; Tue, 05 Nov 2019 15:56:48 -0800 (PST) Date: Tue, 5 Nov 2019 15:56:06 -0800 In-Reply-To: <20191105235608.107702-1-samitolvanen@google.com> Message-Id: <20191105235608.107702-13-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191105235608.107702-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc1.363.gb1bccd3e3d-goog Subject: [PATCH v5 12/14] arm64: vdso: disable Shadow Call Stack From: Sami Tolvanen To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Shadow stacks are only available in the kernel, so disable SCS instrumentation for the vDSO. Signed-off-by: Sami Tolvanen Reviewed-by: Nick Desaulniers Reviewed-by: Kees Cook Reviewed-by: Mark Rutland --- arch/arm64/kernel/vdso/Makefile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile index dd2514bb1511..a87a4f11724e 100644 --- a/arch/arm64/kernel/vdso/Makefile +++ b/arch/arm64/kernel/vdso/Makefile @@ -25,7 +25,7 @@ ccflags-y += -DDISABLE_BRANCH_PROFILING VDSO_LDFLAGS := -Bsymbolic -CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os +CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) KBUILD_CFLAGS += $(DISABLE_LTO) KASAN_SANITIZE := n UBSAN_SANITIZE := n From patchwork Tue Nov 5 23:56:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11228995 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AC8A215AB for ; Tue, 5 Nov 2019 23:58:23 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id E81AB206B8 for ; Tue, 5 Nov 2019 23:58:22 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="p/+qormW" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E81AB206B8 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17313-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 28170 invoked by uid 550); 5 Nov 2019 23:57:04 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 28102 invoked from network); 5 Nov 2019 23:57:04 -0000 
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=SBPjjox/K0nQBp9p/UReaX06mOCKZA5VG5oKrHSRp3U=; b=p/+qormWzpAk/hn6fTIAq1GD6e0WQRag5NF7NjqAhGo5V+9tCeqWxMigwtXWt/lGNp q9buszXWKYVWoziPQLZS51iBSuLA1nUdgVLAfYt05cUtAWebiilSqti+DjDN8Q0UKshx 57MAdb8441Kh4kMQAc47B2g6uSc/jktWoSENQoAO1SfBArC0JfH71OE1ZC9X88wFzf7D 1zKwIMNiGpyq7Ox2PdBV9bj7qpbfLeuEOQjRxH+b5fzmSdlQBlgmJqQb6wUCW+t/kkJd W22heVjnv5z8zfiAr2e8BVc/Krx9f9PGp3snmyjgH+VW1d5sU1rFDMtHZ42LB6knm6uR i+NA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=SBPjjox/K0nQBp9p/UReaX06mOCKZA5VG5oKrHSRp3U=; b=ToT881aPJP7hY4OVbMlO4a5xfay+CQsXxdWgAqyhgrFxEWKQq7IssXb8jEqTSmvZ2i yLGQN5tc82KuIXbAzPlSwoQBU1bJZJ7QLtCvgGHiF0OyCQZUkH7TQm4E3oBnXuS5kWnR exhP+I6JruAWiYjEYjDN/kEyJa3Awjngrh340ul8GuJyOMaNp+zNM9PEHNKb1YeE7o35 yvHL8VVQQy3lFAuWtcoC/3kYsfZO60wcIUp2RpMHtvfi97NYXMtJEbpyUSMcZZVoq9D+ yBN7gMKHxtFOC/gIPz2wAb5QSj4DbfmEt0l+mhWZ1/tppZgzuQo9QP7KE9sUkpppw/kB L/5A== X-Gm-Message-State: APjAAAXsTfMU6wKphWbdIkI+iW8NaKNa+0zbwn8eVJHquEp/r120DUy+ 5bNyXooMUS5a67Hjcm5L/QuhYlG4RCcbvOp+2qU= X-Google-Smtp-Source: APXvYqwMtHOZMoB23ssF7JNCZzaMw12LgXJO1Cx/fyHqurm8wqZn755XqIGbFFeB1mv3cEwMcIBCpgy6byzfZbrNKqg= X-Received: by 2002:a63:6483:: with SMTP id y125mr5619534pgb.444.1572998212169; Tue, 05 Nov 2019 15:56:52 -0800 (PST) Date: Tue, 5 Nov 2019 15:56:07 -0800 In-Reply-To: <20191105235608.107702-1-samitolvanen@google.com> Message-Id: <20191105235608.107702-14-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191105235608.107702-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc1.363.gb1bccd3e3d-goog Subject: [PATCH v5 13/14] arm64: disable SCS for hypervisor code From: Sami Tolvanen To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen Filter out CC_FLAGS_SCS for code that runs at a different exception level. 
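
A note on where CC_FLAGS_SCS comes from: the flag itself is defined and exported by an earlier kbuild patch in this series that is not shown here; a minimal sketch of the assumed top-level Makefile logic is:

  ifdef CONFIG_SHADOW_CALL_STACK
  CC_FLAGS_SCS	:= -fsanitize=shadow-call-stack
  KBUILD_CFLAGS	+= $(CC_FLAGS_SCS)
  export CC_FLAGS_SCS
  endif

Directories whose objects run at a different exception level, such as arch/arm64/kvm/hyp below, can then filter the flag back out of KBUILD_CFLAGS.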
Suggested-by: Steven Rostedt (VMware) Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook Reviewed-by: Mark Rutland --- arch/arm64/kvm/hyp/Makefile | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/arm64/kvm/hyp/Makefile b/arch/arm64/kvm/hyp/Makefile index ea710f674cb6..17ea3da325e9 100644 --- a/arch/arm64/kvm/hyp/Makefile +++ b/arch/arm64/kvm/hyp/Makefile @@ -28,3 +28,6 @@ GCOV_PROFILE := n KASAN_SANITIZE := n UBSAN_SANITIZE := n KCOV_INSTRUMENT := n + +# remove the SCS flags from all objects in this directory +KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS)) From patchwork Tue Nov 5 23:56:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Sami Tolvanen X-Patchwork-Id: 11229003 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9EF6315AB for ; Tue, 5 Nov 2019 23:58:33 +0000 (UTC) Received: from mother.openwall.net (mother.openwall.net [195.42.179.200]) by mail.kernel.org (Postfix) with SMTP id 7E45B206B8 for ; Tue, 5 Nov 2019 23:58:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="LsQ5nRDi" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7E45B206B8 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=kernel-hardening-return-17314-patchwork-kernel-hardening=patchwork.kernel.org@lists.openwall.com Received: (qmail 28557 invoked by uid 550); 5 Nov 2019 23:57:09 -0000 Mailing-List: contact kernel-hardening-help@lists.openwall.com; run by ezmlm Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-ID: Delivered-To: mailing list kernel-hardening@lists.openwall.com Received: (qmail 28449 invoked from network); 5 Nov 2019 23:57:08 -0000 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=EORujaaoDl5CyEMEo7ESY7Wd3JkreW+Wpr9MONYQK/w=; b=LsQ5nRDiYJoBaz3wpSAfz+IBExiBTiactbC2peaTME2A8MJYOi/fZ2hEkrgcVt1TKk wTOltbi7OGIk3uLzPDXXl5I9LXf+BRDkSoFbWV8hzkb1wPzEKpbSpxvVNZaMaEQudSZX 5L7r9Em5E7BX/UGhZaRaGQ0LUMccY4ORBIa/4pSmG0QH8pLwKjd75rGv2xXesHNz0cK3 ICmz2nrNxT337KRn4Ueij8x/dFQBRCiMtrOJjpfwDNYFvb6BcSXyuwmy0WD7UZ95wSrQ gtS/RUeMPvJdQcoeZxKWJxek/vdhKBFEtgiscBt0+w44KT6szUUKYrTwXA5RTBuHRMDL Tvsw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=EORujaaoDl5CyEMEo7ESY7Wd3JkreW+Wpr9MONYQK/w=; b=PK+R4jOYYyrMRI3Y7OTF+SYuCR0ms9/HDXXJJDH8osGZjVGyDbf50NAv2K9kuif2as aQzn6nF5AZ29c8nK055kqIZZQhcp5YAqELuC+Tvmlc1TmVfKRIiKOHdqxU62f/nOeyUL /dFp66J7Zyeleib/66InKYAwDvT+564J65MEOJInyp1S/o7KKq854GBMSFSxknijT0gv 476epTTLrV4SJ8plj7aSTCoSrmPaO9X4VZG824E85G90pA3XNVdituKwb+c5cCUEyukH FJk1GPQU6+zIsLndpAcsmU2bU6RLlsUpaCQ5dnFZZQ3Vd6IRjRH13xTjfGpZQanJxfWI P3Og== X-Gm-Message-State: APjAAAUbePe8Whn+WVBUI/3UZrlnMNB4x4N/EFZC5oYnwjRxuf/N/W9H df0sIKpklOEpCv+BjnK7kI6NonyfEl8kKKKHjd0= X-Google-Smtp-Source: APXvYqwtTzzg3YjCGifmzobRmC/mMAyCeF21/xE3WfWFkE/HSs9gXeQNn+UyOIg502UfBDxC9A7cCT9zBzntqoDLG2A= X-Received: by 2002:a1f:7d84:: with SMTP id y126mr15258241vkc.99.1572998216495; Tue, 05 Nov 2019 15:56:56 -0800 (PST) Date: Tue, 5 Nov 2019 15:56:08 -0800 In-Reply-To: 
<20191105235608.107702-1-samitolvanen@google.com> Message-Id: <20191105235608.107702-15-samitolvanen@google.com> Mime-Version: 1.0 References: <20191018161033.261971-1-samitolvanen@google.com> <20191105235608.107702-1-samitolvanen@google.com> X-Mailer: git-send-email 2.24.0.rc1.363.gb1bccd3e3d-goog Subject: [PATCH v5 14/14] arm64: implement Shadow Call Stack From: Sami Tolvanen To: Will Deacon , Catalin Marinas , Steven Rostedt , Masami Hiramatsu , Ard Biesheuvel Cc: Dave Martin , Kees Cook , Laura Abbott , Mark Rutland , Marc Zyngier , Nick Desaulniers , Jann Horn , Miguel Ojeda , Masahiro Yamada , clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org, Sami Tolvanen This change implements shadow stack switching, initial SCS set-up, and interrupt shadow stacks for arm64. Signed-off-by: Sami Tolvanen Reviewed-by: Kees Cook --- arch/arm64/Kconfig | 5 ++++ arch/arm64/include/asm/scs.h | 37 ++++++++++++++++++++++++++ arch/arm64/include/asm/stacktrace.h | 4 +++ arch/arm64/include/asm/thread_info.h | 3 +++ arch/arm64/kernel/Makefile | 1 + arch/arm64/kernel/asm-offsets.c | 3 +++ arch/arm64/kernel/entry.S | 28 ++++++++++++++++++++ arch/arm64/kernel/head.S | 9 +++++++ arch/arm64/kernel/irq.c | 2 ++ arch/arm64/kernel/process.c | 2 ++ arch/arm64/kernel/scs.c | 39 ++++++++++++++++++++++++++++ arch/arm64/kernel/smp.c | 4 +++ 12 files changed, 137 insertions(+) create mode 100644 arch/arm64/include/asm/scs.h create mode 100644 arch/arm64/kernel/scs.c diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig index 8cda176dad9a..76e32d01d759 100644 --- a/arch/arm64/Kconfig +++ b/arch/arm64/Kconfig @@ -66,6 +66,7 @@ config ARM64 select ARCH_USE_QUEUED_RWLOCKS select ARCH_USE_QUEUED_SPINLOCKS select ARCH_SUPPORTS_MEMORY_FAILURE + select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK select ARCH_SUPPORTS_ATOMIC_RMW select ARCH_SUPPORTS_INT128 if GCC_VERSION >= 50000 || CC_IS_CLANG select ARCH_SUPPORTS_NUMA_BALANCING @@ -948,6 +949,10 @@ config ARCH_HAS_CACHE_LINE_SIZE config ARCH_ENABLE_SPLIT_PMD_PTLOCK def_bool y if PGTABLE_LEVELS > 2 +# Supported by clang >= 7.0 +config CC_HAVE_SHADOW_CALL_STACK + def_bool $(cc-option, -fsanitize=shadow-call-stack -ffixed-x18) + config SECCOMP bool "Enable seccomp to safely compute untrusted bytecode" ---help--- diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h new file mode 100644 index 000000000000..c50d2b0c6c5f --- /dev/null +++ b/arch/arm64/include/asm/scs.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_SCS_H +#define _ASM_SCS_H + +#ifndef __ASSEMBLY__ + +#include + +#ifdef CONFIG_SHADOW_CALL_STACK + +extern void scs_init_irq(void); + +static __always_inline void scs_save(struct task_struct *tsk) +{ + void *s; + + asm volatile("mov %0, x18" : "=r" (s)); + task_set_scs(tsk, s); +} + +static inline void scs_overflow_check(struct task_struct *tsk) +{ + if (unlikely(scs_corrupted(tsk))) + panic("corrupted shadow stack detected inside scheduler\n"); +} + +#else /* CONFIG_SHADOW_CALL_STACK */ + +static inline void scs_init_irq(void) {} +static inline void scs_save(struct task_struct *tsk) {} +static inline void scs_overflow_check(struct task_struct *tsk) {} + +#endif /* CONFIG_SHADOW_CALL_STACK */ + +#endif /* __ASSEMBLY __ */ + +#endif /* _ASM_SCS_H */ diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h index 4d9b1f48dc39..b6cf32fb4efe 100644 --- a/arch/arm64/include/asm/stacktrace.h +++ 
b/arch/arm64/include/asm/stacktrace.h @@ -68,6 +68,10 @@ extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk); DECLARE_PER_CPU(unsigned long *, irq_stack_ptr); +#ifdef CONFIG_SHADOW_CALL_STACK +DECLARE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr); +#endif + static inline bool on_irq_stack(unsigned long sp, struct stack_info *info) { diff --git a/arch/arm64/include/asm/thread_info.h b/arch/arm64/include/asm/thread_info.h index f0cec4160136..8c73764b9ed2 100644 --- a/arch/arm64/include/asm/thread_info.h +++ b/arch/arm64/include/asm/thread_info.h @@ -41,6 +41,9 @@ struct thread_info { #endif } preempt; }; +#ifdef CONFIG_SHADOW_CALL_STACK + void *shadow_call_stack; +#endif }; #define thread_saved_pc(tsk) \ diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index 478491f07b4f..b3995329d9e5 100644 --- a/arch/arm64/kernel/Makefile +++ b/arch/arm64/kernel/Makefile @@ -63,6 +63,7 @@ obj-$(CONFIG_CRASH_CORE) += crash_core.o obj-$(CONFIG_ARM_SDE_INTERFACE) += sdei.o obj-$(CONFIG_ARM64_SSBD) += ssbd.o obj-$(CONFIG_ARM64_PTR_AUTH) += pointer_auth.o +obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o obj-y += vdso/ probes/ obj-$(CONFIG_COMPAT_VDSO) += vdso32/ diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c index 214685760e1c..f6762b9ae1e1 100644 --- a/arch/arm64/kernel/asm-offsets.c +++ b/arch/arm64/kernel/asm-offsets.c @@ -33,6 +33,9 @@ int main(void) DEFINE(TSK_TI_ADDR_LIMIT, offsetof(struct task_struct, thread_info.addr_limit)); #ifdef CONFIG_ARM64_SW_TTBR0_PAN DEFINE(TSK_TI_TTBR0, offsetof(struct task_struct, thread_info.ttbr0)); +#endif +#ifdef CONFIG_SHADOW_CALL_STACK + DEFINE(TSK_TI_SCS, offsetof(struct task_struct, thread_info.shadow_call_stack)); #endif DEFINE(TSK_STACK, offsetof(struct task_struct, stack)); #ifdef CONFIG_STACKPROTECTOR diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index cf3bd2976e57..1eff08c71403 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -172,6 +172,10 @@ alternative_cb_end apply_ssbd 1, x22, x23 +#ifdef CONFIG_SHADOW_CALL_STACK + ldr x18, [tsk, #TSK_TI_SCS] // Restore shadow call stack + str xzr, [tsk, #TSK_TI_SCS] // Limit visibility of saved SCS +#endif .else add x21, sp, #S_FRAME_SIZE get_current_task tsk @@ -278,6 +282,12 @@ alternative_else_nop_endif ct_user_enter .endif +#ifdef CONFIG_SHADOW_CALL_STACK + .if \el == 0 + str x18, [tsk, #TSK_TI_SCS] // Save shadow call stack + .endif +#endif + #ifdef CONFIG_ARM64_SW_TTBR0_PAN /* * Restore access to TTBR0_EL1. If returning to EL0, no need for SPSR @@ -383,6 +393,9 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 .macro irq_stack_entry mov x19, sp // preserve the original sp +#ifdef CONFIG_SHADOW_CALL_STACK + mov x20, x18 // preserve the original shadow stack +#endif /* * Compare sp with the base of the task stack. 
@@ -400,6 +413,12 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 /* switch to the irq stack */ mov sp, x26 + +#ifdef CONFIG_SHADOW_CALL_STACK + /* also switch to the irq shadow stack */ + ldr_this_cpu x18, irq_shadow_call_stack_ptr, x26 +#endif + 9998: .endm @@ -409,6 +428,10 @@ alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0 */ .macro irq_stack_exit mov sp, x19 +#ifdef CONFIG_SHADOW_CALL_STACK + /* x20 is also preserved */ + mov x18, x20 +#endif .endm /* GPRs used by entry code */ @@ -1155,6 +1178,11 @@ ENTRY(cpu_switch_to) ldr lr, [x8] mov sp, x9 msr sp_el0, x1 +#ifdef CONFIG_SHADOW_CALL_STACK + str x18, [x0, #TSK_TI_SCS] + ldr x18, [x1, #TSK_TI_SCS] + str xzr, [x1, #TSK_TI_SCS] // limit visibility of saved SCS +#endif ret ENDPROC(cpu_switch_to) NOKPROBE(cpu_switch_to) diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S index 989b1944cb71..ca561de903d4 100644 --- a/arch/arm64/kernel/head.S +++ b/arch/arm64/kernel/head.S @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -424,6 +425,10 @@ __primary_switched: stp xzr, x30, [sp, #-16]! mov x29, sp +#ifdef CONFIG_SHADOW_CALL_STACK + adr_l x18, init_shadow_call_stack // Set shadow call stack +#endif + str_l x21, __fdt_pointer, x5 // Save FDT pointer ldr_l x4, kimage_vaddr // Save the offset between @@ -731,6 +736,10 @@ __secondary_switched: ldr x2, [x0, #CPU_BOOT_TASK] cbz x2, __secondary_too_slow msr sp_el0, x2 +#ifdef CONFIG_SHADOW_CALL_STACK + ldr x18, [x2, #TSK_TI_SCS] // set shadow call stack + str xzr, [x2, #TSK_TI_SCS] // limit visibility of saved SCS +#endif mov x29, #0 mov x30, #0 b secondary_start_kernel diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c index 04a327ccf84d..fe0ca522ff60 100644 --- a/arch/arm64/kernel/irq.c +++ b/arch/arm64/kernel/irq.c @@ -21,6 +21,7 @@ #include #include #include +#include unsigned long irq_err_count; @@ -63,6 +64,7 @@ static void init_irq_stacks(void) void __init init_IRQ(void) { init_irq_stacks(); + scs_init_irq(); irqchip_init(); if (!handle_arch_irq) panic("No interrupt controller found."); diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c index 71f788cd2b18..5f0aec285848 100644 --- a/arch/arm64/kernel/process.c +++ b/arch/arm64/kernel/process.c @@ -52,6 +52,7 @@ #include #include #include +#include #include #if defined(CONFIG_STACKPROTECTOR) && !defined(CONFIG_STACKPROTECTOR_PER_TASK) @@ -507,6 +508,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev, uao_thread_switch(next); ptrauth_thread_switch(next); ssbs_thread_switch(next); + scs_overflow_check(next); /* * Complete any pending TLB or cache maintenance on this CPU in case diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c new file mode 100644 index 000000000000..6f255072c9a9 --- /dev/null +++ b/arch/arm64/kernel/scs.c @@ -0,0 +1,39 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Shadow Call Stack support. 
+ * + * Copyright (C) 2019 Google LLC + */ + +#include +#include +#include + +DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr); + +#ifndef CONFIG_SHADOW_CALL_STACK_VMAP +DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack) + __aligned(SCS_SIZE); +#endif + +void scs_init_irq(void) +{ + int cpu; + + for_each_possible_cpu(cpu) { +#ifdef CONFIG_SHADOW_CALL_STACK_VMAP + unsigned long *p; + + p = __vmalloc_node_range(SCS_SIZE, SCS_SIZE, + VMALLOC_START, VMALLOC_END, + SCS_GFP, PAGE_KERNEL, + 0, cpu_to_node(cpu), + __builtin_return_address(0)); + + per_cpu(irq_shadow_call_stack_ptr, cpu) = p; +#else + per_cpu(irq_shadow_call_stack_ptr, cpu) = + per_cpu(irq_shadow_call_stack, cpu); +#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */ + } +} diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c index dc9fe879c279..cc1938a585d2 100644 --- a/arch/arm64/kernel/smp.c +++ b/arch/arm64/kernel/smp.c @@ -44,6 +44,7 @@ #include #include #include +#include #include #include #include @@ -357,6 +358,9 @@ void cpu_die(void) { unsigned int cpu = smp_processor_id(); + /* Save the shadow stack pointer before exiting the idle task */ + scs_save(current); + idle_task_exit(); local_daif_mask();
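
As a usage sketch, assuming the Kconfig symbols introduced by the earlier patches in this series, a configuration that exercises the code above, including the per-process usage reporting from patch 07, would look roughly like:

	CONFIG_SHADOW_CALL_STACK=y
	# CONFIG_SHADOW_CALL_STACK_VMAP is not set
	CONFIG_DEBUG_STACK_USAGE=y

With CONFIG_DEBUG_STACK_USAGE=y, scs_release() reports the highest shadow stack usage observed so far via pr_info() as tasks exit.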