From patchwork Sun Apr 12 00:24:03 2020
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 11484257
From: Linus Walleij
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel, Andrey Ryabinin
Cc: Marc Zyngier, Linus Walleij, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/5 v2] ARM: Disable KASan instrumentation for some code
Date: Sun, 12 Apr 2020 02:24:03 +0200
Message-Id: <20200412002407.396790-2-linus.walleij@linaro.org>
In-Reply-To: <20200412002407.396790-1-linus.walleij@linaro.org>
References: <20200412002407.396790-1-linus.walleij@linaro.org>

From: Andrey Ryabinin

Disable instrumentation for arch/arm/boot/compressed/* since that code
is executed before the kernel has even set up its mappings and is
definitely out of scope for KASan.

Disable instrumentation of arch/arm/vdso/* because that code is not
linked with the kernel image, so the KASan management code would fail
to link.

Disable instrumentation of arch/arm/mm/physaddr.c. See commit
ec6d06efb0ba ("arm64: Add support for CONFIG_DEBUG_VIRTUAL") for more
details.

Disable the KASan check in the function unwind_pop_register() because
it does not matter if a KASan check fails when unwind_pop_register()
reads the stack memory of a task.

Reported-by: Florian Fainelli
Reported-by: Marc Zyngier
Signed-off-by: Abbott Liu
Signed-off-by: Florian Fainelli
Signed-off-by: Linus Walleij
---
ChangeLog v1->v2:
- Removed the KVM instrumentation disablement since KVM on ARM32 is
  gone.
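To make the unwinder change in the diff below concrete, here is a small
user-space sketch of the access pattern it relies on. This is an
illustration only: read_once_nocheck(), struct unwind_ctrl and
pop_register() are simplified stand-ins invented for this example, not
the kernel's definitions.

    #include <stdio.h>

    /*
     * Hypothetical stand-in for the kernel's READ_ONCE_NOCHECK(): a raw
     * volatile load that an instrumenting build would be told not to check.
     */
    #define read_once_nocheck(x) (*(const volatile unsigned long *)&(x))

    struct unwind_ctrl {
            unsigned long *vsp;     /* stack pointer of the task being unwound */
            unsigned long vrs[16];  /* recovered register values */
    };

    /* Pop one word from the unwound stack without a sanitizer check. */
    static int pop_register(struct unwind_ctrl *ctrl, int reg)
    {
            ctrl->vrs[reg] = read_once_nocheck(*ctrl->vsp);
            ctrl->vsp++;
            return 0;
    }

    int main(void)
    {
            unsigned long fake_stack[4] = { 0x1111, 0x2222, 0x3333, 0x4444 };
            struct unwind_ctrl ctrl = { .vsp = fake_stack };

            pop_register(&ctrl, 14);        /* pretend we pop LR (r14) */
            printf("r14 = %#lx\n", ctrl.vrs[14]);
            return 0;
    }

The real patch makes the same move: the stack words of another task may
lie in memory KASan would complain about, so the unwinder performs the
read through a non-instrumented accessor instead of a plain dereference.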
--- arch/arm/boot/compressed/Makefile | 1 + arch/arm/kernel/unwind.c | 6 +++++- arch/arm/mm/Makefile | 1 + arch/arm/vdso/Makefile | 2 ++ 4 files changed, 9 insertions(+), 1 deletion(-) diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile index 9c11e7490292..abd6f3d5c2ba 100644 --- a/arch/arm/boot/compressed/Makefile +++ b/arch/arm/boot/compressed/Makefile @@ -24,6 +24,7 @@ OBJS += hyp-stub.o endif GCOV_PROFILE := n +KASAN_SANITIZE := n # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. KCOV_INSTRUMENT := n diff --git a/arch/arm/kernel/unwind.c b/arch/arm/kernel/unwind.c index 11a964fd66f4..739a77f39a8f 100644 --- a/arch/arm/kernel/unwind.c +++ b/arch/arm/kernel/unwind.c @@ -236,7 +236,11 @@ static int unwind_pop_register(struct unwind_ctrl_block *ctrl, if (*vsp >= (unsigned long *)ctrl->sp_high) return -URC_FAILURE; - ctrl->vrs[reg] = *(*vsp)++; + /* Use READ_ONCE_NOCHECK here to avoid this memory access + * from being tracked by KASAN. + */ + ctrl->vrs[reg] = READ_ONCE_NOCHECK(*(*vsp)); + (*vsp)++; return URC_OK; } diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile index 7cb1699fbfc4..432302911d6e 100644 --- a/arch/arm/mm/Makefile +++ b/arch/arm/mm/Makefile @@ -16,6 +16,7 @@ endif obj-$(CONFIG_ARM_PTDUMP_CORE) += dump.o obj-$(CONFIG_ARM_PTDUMP_DEBUGFS) += ptdump_debugfs.o obj-$(CONFIG_MODULES) += proc-syms.o +KASAN_SANITIZE_physaddr.o := n obj-$(CONFIG_DEBUG_VIRTUAL) += physaddr.o obj-$(CONFIG_ALIGNMENT_TRAP) += alignment.o diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile index d3c9f03e7e79..71d18d59bd35 100644 --- a/arch/arm/vdso/Makefile +++ b/arch/arm/vdso/Makefile @@ -42,6 +42,8 @@ GCOV_PROFILE := n # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. KCOV_INSTRUMENT := n +KASAN_SANITIZE := n + # Force dependency $(obj)/vdso.o : $(obj)/vdso.so
From patchwork Sun Apr 12 00:24:04 2020
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 11484259
From: Linus Walleij
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel, Andrey Ryabinin
Cc: Linus Walleij, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/5 v2] ARM: Replace memory functions for KASan
Date: Sun, 12 Apr 2020 02:24:04 +0200
Message-Id: <20200412002407.396790-3-linus.walleij@linaro.org>
In-Reply-To: <20200412002407.396790-1-linus.walleij@linaro.org>
References: <20200412002407.396790-1-linus.walleij@linaro.org>

From: Andrey Ryabinin

Functions like memset()/memmove()/memcpy() do a lot of memory accesses.
If a bad pointer is passed to one of these functions it is important to
catch this. Compiler instrumentation cannot do this since these
functions are written in assembly.

KASan replaces these memory functions with instrumented variants. The
original functions are declared as weak symbols so that the strong
definitions in mm/kasan/kasan.c can replace them. The original
functions have aliases with a '__' prefix in their name, so we can call
the non-instrumented variant if needed.

We must use __memcpy()/__memset() in place of memcpy()/memset() when we
copy .data to RAM and when we clear .bss, because kasan_early_init
cannot be called before the initialization of .data and .bss.

For the kernel compression and the EFI libstub we need to tag on the
-D__SANITIZE_ADDRESS__ CFLAG, which is unfortunate: this is used only
when building with KASan enabled. When building with KASan, all mem*()
functions such as memcpy() are instrumented. However, code that
disables KASan for an entire directory with KASAN_SANITIZE := n in its
Makefile runs into problems if it tries to use the mem*() functions,
because the define that guards the uninstrumented variants looks like
this:

#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
(...)
#define memcpy(dst, src, len) __memcpy(dst, src, len)
(...)
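To see how that guard behaves, here is a small stand-alone sketch. It
is an illustration only: MY_KASAN, MY_SANITIZE_THIS_FILE, my_memcpy()
and __my_memcpy() are made-up stand-ins for CONFIG_KASAN,
__SANITIZE_ADDRESS__, memcpy() and __memcpy(), not the kernel's macros.

    #include <stdio.h>
    #include <string.h>

    #define MY_KASAN 1                   /* stands in for CONFIG_KASAN */
    /* #define MY_SANITIZE_THIS_FILE */  /* stands in for __SANITIZE_ADDRESS__ */

    /* The "real", uninstrumented copy routine, like the kernel's __memcpy(). */
    static void *__my_memcpy(void *dst, const void *src, size_t n)
    {
            return memcpy(dst, src, n);
    }

    /*
     * When the sanitizer is enabled globally but this file is built without
     * instrumentation, redirect the mem*() name to the plain alias.
     */
    #if defined(MY_KASAN) && !defined(MY_SANITIZE_THIS_FILE)
    #define my_memcpy(dst, src, n) __my_memcpy(dst, src, n)
    #else
    #define my_memcpy(dst, src, n) memcpy(dst, src, n)  /* instrumented path */
    #endif

    int main(void)
    {
            char dst[8];

            my_memcpy(dst, "hello", 6);  /* resolves to __my_memcpy() here */
            puts(dst);
            return 0;
    }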
In the kernel case, KASAN is enabled, but the compilation procedure
relies on the Makefile passing -fsanitize=kernel-address to the
compiler for each instrumented file, so that __SANITIZE_ADDRESS__ gets
set and these weakly defined functions are replaced with instrumented
KASan variants. If KASAN is enabled but __SANITIZE_ADDRESS__ is not
set, i.e. -fsanitize is not passed, the idea is that these #defines
kick in and redirect e.g. memcpy() to __memcpy() - all good. The
decompressor and the libstub use memcpy() and disable KASan with
KASAN_SANITIZE := n, so CONFIG_KASAN is defined but
__SANITIZE_ADDRESS__ is not, and we get a compilation error unless we
patch -D__SANITIZE_ADDRESS__ into these Makefiles.

Reported-by: Russell King - ARM Linux
Signed-off-by: Abbott Liu
Signed-off-by: Florian Fainelli
Signed-off-by: Linus Walleij
---
ChangeLog v1->v2:
- Move the hacks around __SANITIZE_ADDRESS__ into this file
- Edit the commit message
- Rebase on the other v2 patches
--- arch/arm/boot/compressed/Makefile | 2 ++ arch/arm/include/asm/string.h | 17 +++++++++++++++++ arch/arm/kernel/head-common.S | 4 ++-- arch/arm/lib/memcpy.S | 3 +++ arch/arm/lib/memmove.S | 5 ++++- arch/arm/lib/memset.S | 3 +++ drivers/firmware/efi/libstub/Makefile | 5 ++++- 7 files changed, 35 insertions(+), 4 deletions(-) diff --git a/arch/arm/boot/compressed/Makefile b/arch/arm/boot/compressed/Makefile index abd6f3d5c2ba..9d0316876052 100644 --- a/arch/arm/boot/compressed/Makefile +++ b/arch/arm/boot/compressed/Makefile @@ -25,6 +25,8 @@ endif GCOV_PROFILE := n KASAN_SANITIZE := n +# Make sure we get the uninstrumented __mem*() functions +CFLAGS_KERNEL := -D__SANITIZE_ADDRESS__ # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. KCOV_INSTRUMENT := n diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h index 111a1d8a41dd..2e1dd7a0cf63 100644 --- a/arch/arm/include/asm/string.h +++ b/arch/arm/include/asm/string.h @@ -15,15 +15,18 @@ extern char * strchr(const char * s, int c); #define __HAVE_ARCH_MEMCPY extern void * memcpy(void *, const void *, __kernel_size_t); +extern void *__memcpy(void *dest, const void *src, __kernel_size_t n); #define __HAVE_ARCH_MEMMOVE extern void * memmove(void *, const void *, __kernel_size_t); +extern void *__memmove(void *dest, const void *src, __kernel_size_t n); #define __HAVE_ARCH_MEMCHR extern void * memchr(const void *, int, __kernel_size_t); #define __HAVE_ARCH_MEMSET extern void * memset(void *, int, __kernel_size_t); +extern void *__memset(void *s, int c, __kernel_size_t n); #define __HAVE_ARCH_MEMSET32 extern void *__memset32(uint32_t *, uint32_t v, __kernel_size_t); @@ -39,4 +42,18 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n) return __memset64(p, v, n * 8, v >> 32); } + + +#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) + +/* + * For files that are not instrumented (e.g. mm/slub.c) we + * must use non-instrumented versions of the mem* functions.
+ */ + +#define memcpy(dst, src, len) __memcpy(dst, src, len) +#define memmove(dst, src, len) __memmove(dst, src, len) +#define memset(s, c, n) __memset(s, c, n) +#endif + #endif diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S index 4a3982812a40..6840c7c60a85 100644 --- a/arch/arm/kernel/head-common.S +++ b/arch/arm/kernel/head-common.S @@ -95,7 +95,7 @@ __mmap_switched: THUMB( ldmia r4!, {r0, r1, r2, r3} ) THUMB( mov sp, r3 ) sub r2, r2, r1 - bl memcpy @ copy .data to RAM + bl __memcpy @ copy .data to RAM #endif ARM( ldmia r4!, {r0, r1, sp} ) @@ -103,7 +103,7 @@ __mmap_switched: THUMB( mov sp, r3 ) sub r2, r1, r0 mov r1, #0 - bl memset @ clear .bss + bl __memset @ clear .bss ldmia r4, {r0, r1, r2, r3} str r9, [r0] @ Save processor ID diff --git a/arch/arm/lib/memcpy.S b/arch/arm/lib/memcpy.S index 09a333153dc6..ad4625d16e11 100644 --- a/arch/arm/lib/memcpy.S +++ b/arch/arm/lib/memcpy.S @@ -58,6 +58,8 @@ /* Prototype: void *memcpy(void *dest, const void *src, size_t n); */ +.weak memcpy +ENTRY(__memcpy) ENTRY(mmiocpy) ENTRY(memcpy) @@ -65,3 +67,4 @@ ENTRY(memcpy) ENDPROC(memcpy) ENDPROC(mmiocpy) +ENDPROC(__memcpy) diff --git a/arch/arm/lib/memmove.S b/arch/arm/lib/memmove.S index b50e5770fb44..fd123ea5a5a4 100644 --- a/arch/arm/lib/memmove.S +++ b/arch/arm/lib/memmove.S @@ -24,12 +24,14 @@ * occurring in the opposite direction. */ +.weak memmove +ENTRY(__memmove) ENTRY(memmove) UNWIND( .fnstart ) subs ip, r0, r1 cmphi r2, ip - bls memcpy + bls __memcpy stmfd sp!, {r0, r4, lr} UNWIND( .fnend ) @@ -222,3 +224,4 @@ ENTRY(memmove) 18: backward_copy_shift push=24 pull=8 ENDPROC(memmove) +ENDPROC(__memmove) diff --git a/arch/arm/lib/memset.S b/arch/arm/lib/memset.S index 6ca4535c47fb..0e7ff0423f50 100644 --- a/arch/arm/lib/memset.S +++ b/arch/arm/lib/memset.S @@ -13,6 +13,8 @@ .text .align 5 +.weak memset +ENTRY(__memset) ENTRY(mmioset) ENTRY(memset) UNWIND( .fnstart ) @@ -132,6 +134,7 @@ UNWIND( .fnstart ) UNWIND( .fnend ) ENDPROC(memset) ENDPROC(mmioset) +ENDPROC(__memset) ENTRY(__memset32) UNWIND( .fnstart ) diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile index 094eabdecfe6..ea37c9c21273 100644 --- a/drivers/firmware/efi/libstub/Makefile +++ b/drivers/firmware/efi/libstub/Makefile @@ -19,9 +19,12 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ -O2 \ # disable the stackleak plugin cflags-$(CONFIG_ARM64) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \ -fpie $(DISABLE_STACKLEAK_PLUGIN) +# Define __SANITIZE_ADDRESS__ in order to get the weak __mem*() functions +# when building with KASan. 
cflags-$(CONFIG_ARM) := $(subst $(CC_FLAGS_FTRACE),,$(KBUILD_CFLAGS)) \ -fno-builtin -fpic \ - $(call cc-option,-mno-single-pic-base) + $(call cc-option,-mno-single-pic-base) \ + -D__SANITIZE_ADDRESS__ cflags-$(CONFIG_EFI_ARMSTUB) += -I$(srctree)/scripts/dtc/libfdt
From patchwork Sun Apr 12 00:24:05 2020
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 11484261
From: Linus Walleij
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel, Andrey Ryabinin
Cc: Linus Walleij, linux-arm-kernel@lists.infradead.org, Ard Biesheuvel
Subject: [PATCH 3/5 v2] ARM: Define the virtual space of KASan's shadow region
Date: Sun, 12 Apr 2020 02:24:05 +0200
Message-Id: <20200412002407.396790-4-linus.walleij@linaro.org>
In-Reply-To: <20200412002407.396790-1-linus.walleij@linaro.org>
References: <20200412002407.396790-1-linus.walleij@linaro.org>

From: Abbott Liu

Define KASAN_SHADOW_OFFSET, KASAN_SHADOW_START and KASAN_SHADOW_END for
the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB
addressable by a 32bit architecture) out of the virtual address space
to use as shadow memory for KASan as follows:

 +----+ 0xffffffff
 |    |\
 |    | |-> Static kernel image (vmlinux) BSS and page table
 |    |/
 +----+ PAGE_OFFSET
 |    |\
 |    | |-> Loadable kernel modules virtual address space area
 |    |/
 +----+ MODULES_VADDR = KASAN_SHADOW_END
 |    |\
 |    | |-> The shadow area of kernel virtual address.
 |    |/
 +----+-> TASK_SIZE (start of kernel space) = KASAN_SHADOW_START, the
 |    |\   shadow address of MODULES_VADDR
 |    | |
 |    | |
 |    | |-> The user space area in lowmem. The kernel address
 |    | |   sanitizer does not use this space, nor does it map it.
 |    | |
 |    | |
 |    | |
 |    | |
 |    |/
 ------ 0

0 .. TASK_SIZE is the memory that can be used by shared
userspace/kernelspace. It is used for userspace processes and for
passing parameters and memory buffers in system calls etc. We do not
need to shadow this area.

KASAN_SHADOW_START:
This value is the shadow address of MODULES_VADDR. It is the start of
kernel virtual space. Since we have modules to load, we need to cover
also that area with shadow memory so we can find memory bugs in
modules.

KASAN_SHADOW_END:
This value is the shadow address of 0x100000000: the mapping that
would come after the end of the kernel memory at 0xffffffff. It is the
end of the kernel address sanitizer's shadow area. It is also the
start of the module area.

KASAN_SHADOW_OFFSET:
This value is used to map an address to the corresponding shadow
address with the following formula:

  shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET;

As you would expect, >> 3 is equal to dividing by 8, meaning each byte
in the shadow memory covers 8 bytes of kernel memory, so one bit of
shadow memory is used per byte of kernel memory.

The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending on
the VMSPLIT layout of the system: the kernel and userspace can split
up lowmem in different ways according to needs, so we calculate the
shadow offset depending on this.

When KASan is enabled, the definition of TASK_SIZE is not an 8-bit
rotated constant, so we need to modify the TASK_SIZE access code in
the *.S files.

The kernel and modules may use different amounts of memory, according
to the VMSPLIT configuration, which in turn determines the
PAGE_OFFSET. We use the following KASAN_SHADOW_OFFSETs depending on
how the virtual memory is split up:

- 0x1f000000 if we have 1G userspace / 3G kernelspace split:
  - The kernel address space is 3G (0xc0000000)
  - PAGE_OFFSET is then set to 0x40000000 so the kernel static image
    (vmlinux) uses addresses 0x40000000 .. 0xffffffff
  - On top of that we have the MODULES_VADDR which under the worst
    case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) =
    0x3f000000, so the modules use addresses 0x3f000000 .. 0x3fffffff
  - So the addresses 0x3f000000 .. 0xffffffff need to be covered with
    shadow memory. That is 0xc1000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x18200000 bytes
    of shadow memory is needed. We "steal" that from the remaining
    lowmem.
  - The KASAN_SHADOW_START becomes 0x26e00000, to KASAN_SHADOW_END at
    0x3effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel
    address as 0x3f000000 needs to map to the first byte of shadow
    memory and 0xffffffff needs to map to the last byte of shadow
    memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0x26e00000 = (0x3f000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0x26e00000 - (0x3f000000 >> 3)
    KASAN_SHADOW_OFFSET = 0x26e00000 - 0x07e00000
    KASAN_SHADOW_OFFSET = 0x1f000000

- 0x5f000000 if we have 2G userspace / 2G kernelspace split:
  - The kernel space is 2G (0x80000000)
  - PAGE_OFFSET is set to 0x80000000 so the kernel static image uses
    0x80000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under the worst
    case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) =
    0x7f000000, so the modules use addresses 0x7f000000 ..
    0x7fffffff
  - So the addresses 0x7f000000 .. 0xffffffff need to be covered with
    shadow memory. That is 0x81000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x10200000 bytes
    of shadow memory is needed. We "steal" that from the remaining
    lowmem.
  - The KASAN_SHADOW_START becomes 0x6ee00000, to KASAN_SHADOW_END at
    0x7effffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel
    address as 0x7f000000 needs to map to the first byte of shadow
    memory and 0xffffffff needs to map to the last byte of shadow
    memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0x6ee00000 = (0x7f000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0x6ee00000 - (0x7f000000 >> 3)
    KASAN_SHADOW_OFFSET = 0x6ee00000 - 0x0fe00000
    KASAN_SHADOW_OFFSET = 0x5f000000

- 0x9f000000 if we have 3G userspace / 1G kernelspace split, and this
  is the default split for ARM:
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xc0000000 so the kernel static image uses
    0xc0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under the worst
    case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) =
    0xbf000000, so the modules use addresses 0xbf000000 .. 0xbfffffff
  - So the addresses 0xbf000000 .. 0xffffffff need to be covered with
    shadow memory. That is 0x41000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x08200000 bytes
    of shadow memory is needed. We "steal" that from the remaining
    lowmem.
  - The KASAN_SHADOW_START becomes 0xb6e00000, to KASAN_SHADOW_END at
    0xbfffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel
    address as 0xbf000000 needs to map to the first byte of shadow
    memory and 0xffffffff needs to map to the last byte of shadow
    memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0xb6e00000 = (0xbf000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0xb6e00000 - (0xbf000000 >> 3)
    KASAN_SHADOW_OFFSET = 0xb6e00000 - 0x17e00000
    KASAN_SHADOW_OFFSET = 0x9f000000

- 0x8f000000 if we have 3G userspace / 1G kernelspace with full 1 GB
  low memory (VMSPLIT_3G_OPT):
  - The kernel address space is 1GB (0x40000000)
  - PAGE_OFFSET is set to 0xb0000000 so the kernel static image uses
    0xb0000000 .. 0xffffffff.
  - On top of that we have the MODULES_VADDR which under the worst
    case (using ARM instructions) is PAGE_OFFSET - 16M (0x01000000) =
    0xaf000000, so the modules use addresses 0xaf000000 .. 0xafffffff
  - So the addresses 0xaf000000 .. 0xffffffff need to be covered with
    shadow memory. That is 0x51000000 bytes of memory.
  - 1/8 of that is needed for its shadow memory, so 0x0a200000 bytes
    of shadow memory is needed. We "steal" that from the remaining
    lowmem.
  - The KASAN_SHADOW_START becomes 0xa4e00000, to KASAN_SHADOW_END at
    0xaeffffff.
  - Now we can calculate the KASAN_SHADOW_OFFSET for any kernel
    address as 0xaf000000 needs to map to the first byte of shadow
    memory and 0xffffffff needs to map to the last byte of shadow
    memory. Since:
    SHADOW_ADDR = (address >> 3) + KASAN_SHADOW_OFFSET
    0xa4e00000 = (0xaf000000 >> 3) + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_OFFSET = 0xa4e00000 - (0xaf000000 >> 3)
    KASAN_SHADOW_OFFSET = 0xa4e00000 - 0x15e00000
    KASAN_SHADOW_OFFSET = 0x8f000000

- The default value of 0xffffffff for KASAN_SHADOW_OFFSET is an error
  value. We should always match one of the above shadow offsets.

When we do this, TASK_SIZE will sometimes take slightly odd values
that do not fit into an immediate mov assembly instruction.
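For reference, the offset arithmetic above can be checked with a small
stand-alone C program. This is an illustration only: the table simply
restates the constants quoted in this commit message, it does not read
anything from kernel headers.

    #include <stdio.h>

    /* shadow_addr = (addr >> 3) + KASAN_SHADOW_OFFSET, per VMSPLIT. */
    struct split {
            const char *name;
            unsigned long first_shadowed;   /* MODULES_VADDR for that split */
            unsigned long shadow_start;     /* expected KASAN_SHADOW_START  */
            unsigned long shadow_offset;    /* expected KASAN_SHADOW_OFFSET */
    };

    static const struct split splits[] = {
            { "1G user / 3G kernel", 0x3f000000UL, 0x26e00000UL, 0x1f000000UL },
            { "2G user / 2G kernel", 0x7f000000UL, 0x6ee00000UL, 0x5f000000UL },
            { "3G user / 1G kernel", 0xbf000000UL, 0xb6e00000UL, 0x9f000000UL },
            { "VMSPLIT_3G_OPT",      0xaf000000UL, 0xa4e00000UL, 0x8f000000UL },
    };

    int main(void)
    {
            for (unsigned int i = 0; i < sizeof(splits) / sizeof(splits[0]); i++) {
                    const struct split *s = &splits[i];
                    unsigned long offset = s->shadow_start - (s->first_shadowed >> 3);
                    unsigned long start = (s->first_shadowed >> 3) + s->shadow_offset;

                    printf("%-20s offset %#lx (expect %#lx), start %#lx (expect %#lx)\n",
                           s->name, offset, s->shadow_offset, start, s->shadow_start);
            }
            return 0;
    }

Since TASK_SIZE is defined as KASAN_SHADOW_START when KASan is enabled,
it stops being a constant that fits the usual 8-bit rotated immediates,
which is what the next paragraph deals with.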
To account for this, we need to rewrite some assembly using TASK_SIZE like this: - mov r1, #TASK_SIZE + ldr r1, =TASK_SIZE or - cmp r4, #TASK_SIZE + ldr r0, =TASK_SIZE + cmp r4, r0 this is done to avoid the immediate #TASK_SIZE that need to fit into a limited number of bits. Cc: Andrey Ryabinin Reported-by: Ard Biesheuvel Signed-off-by: Abbott Liu Signed-off-by: Florian Fainelli Signed-off-by: Linus Walleij --- ChangeLog v1->v2: - Use the SPDX license identifier. - Rewrote the commit message and updates the illustration. - Move KASAN_OFFSET Kconfig set-up into this patch and put it right after PAGE_OFFSET so it is clear how this works, and we have all defines in one patch. - Added KASAN_SHADOW_OFFSET of 0x8f000000 for 3G_OPT. See the calculation in the commit message. - Updated the commit message with detailed information on how KASAN_SHADOW_OFFSET is obtained for the different VMSPLIT/PAGE_OFFSET options. --- arch/arm/Kconfig | 9 ++++ arch/arm/include/asm/kasan_def.h | 81 ++++++++++++++++++++++++++++++++ arch/arm/include/asm/memory.h | 5 ++ arch/arm/kernel/entry-armv.S | 5 +- arch/arm/kernel/entry-common.S | 9 ++-- arch/arm/mm/mmu.c | 7 ++- 6 files changed, 110 insertions(+), 6 deletions(-) create mode 100644 arch/arm/include/asm/kasan_def.h diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig index 66a04f6f4775..f6f2d3b202f5 100644 --- a/arch/arm/Kconfig +++ b/arch/arm/Kconfig @@ -1325,6 +1325,15 @@ config PAGE_OFFSET default 0xB0000000 if VMSPLIT_3G_OPT default 0xC0000000 +config KASAN_SHADOW_OFFSET + hex + depends on KASAN + default 0x1f000000 if PAGE_OFFSET=0x40000000 + default 0x5f000000 if PAGE_OFFSET=0x80000000 + default 0x9f000000 if PAGE_OFFSET=0xC0000000 + default 0x8f000000 if PAGE_OFFSET=0xB0000000 + default 0xffffffff + config NR_CPUS int "Maximum number of CPUs (2-32)" range 2 32 diff --git a/arch/arm/include/asm/kasan_def.h b/arch/arm/include/asm/kasan_def.h new file mode 100644 index 000000000000..5739605aa7cf --- /dev/null +++ b/arch/arm/include/asm/kasan_def.h @@ -0,0 +1,81 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * arch/arm/include/asm/kasan_def.h + * + * Copyright (c) 2018 Huawei Technologies Co., Ltd. + * + * Author: Abbott Liu + */ + +#ifndef __ASM_KASAN_DEF_H +#define __ASM_KASAN_DEF_H + +#ifdef CONFIG_KASAN + +/* + * Define KASAN_SHADOW_OFFSET,KASAN_SHADOW_START and KASAN_SHADOW_END for + * the Arm kernel address sanitizer. We are "stealing" lowmem (the 4GB + * addressable by a 32bit architecture) out of the virtual address + * space to use as shadow memory for KASan as follows: + * + * +----+ 0xffffffff + * | | \ + * | | |-> Static kernel image (vmlinux) BSS and page table + * | |/ + * +----+ PAGE_OFFSET + * | | \ + * | | |-> Loadable kernel modules virtual address space area + * | |/ + * +----+ MODULES_VADDR = KASAN_SHADOW_END + * | | \ + * | | |-> The shadow area of kernel virtual address. + * | |/ + * +----+-> TASK_SIZE (start of kernel space) = KASAN_SHADOW_START the + * | |\ shadow address of MODULES_VADDR + * | | | + * | | | + * | | |-> The user space area in lowmem. The kernel address + * | | | sanitizer do not use this space, nor does it map it. + * | | | + * | | | + * | | | + * | | | + * | |/ + * ------ 0 + * + * 1) KASAN_SHADOW_START + * This value begins with the MODULE_VADDR's shadow address. It is the + * start of kernel virtual space. Since we have modules to load, we need + * to cover also that area with shadow memory so we can find memory + * bugs in modules. 
+ * + * 2) KASAN_SHADOW_END + * This value is the 0x100000000's shadow address: the mapping that would + * be after the end of the kernel memory at 0xffffffff. It is the end of + * kernel address sanitizer shadow area. It is also the start of the + * module area. + * + * 3) KASAN_SHADOW_OFFSET: + * This value is used to map an address to the corresponding shadow + * address by the following formula: + * + * shadow_addr = (address >> 3) + KASAN_SHADOW_OFFSET; + * + * As you would expect, >> 3 is equal to dividing by 8, meaning each + * byte in the shadow memory covers 8 bytes of kernel memory, so one + * bit shadow memory per byte of kernel memory is used. + * + * The KASAN_SHADOW_OFFSET is provided in a Kconfig option depending + * on the VMSPLIT layout of the system: the kernel and userspace can + * split up lowmem in different ways according to needs, so we calculate + * the shadow offset depending on this. + */ + +#define KASAN_SHADOW_SCALE_SHIFT 3 +#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL) +#define KASAN_SHADOW_END ((UL(1) << (32 - KASAN_SHADOW_SCALE_SHIFT)) \ + + KASAN_SHADOW_OFFSET) +#define KASAN_SHADOW_START ((KASAN_SHADOW_END >> 3) + KASAN_SHADOW_OFFSET) + +#endif +#endif diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h index 99035b5891ef..5cfa9e5dc733 100644 --- a/arch/arm/include/asm/memory.h +++ b/arch/arm/include/asm/memory.h @@ -18,6 +18,7 @@ #ifdef CONFIG_NEED_MACH_MEMORY_H #include #endif +#include /* PAGE_OFFSET - the virtual address of the start of the kernel image */ #define PAGE_OFFSET UL(CONFIG_PAGE_OFFSET) @@ -28,7 +29,11 @@ * TASK_SIZE - the maximum size of a user space task. * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area */ +#ifndef CONFIG_KASAN #define TASK_SIZE (UL(CONFIG_PAGE_OFFSET) - UL(SZ_16M)) +#else +#define TASK_SIZE (KASAN_SHADOW_START) +#endif #define TASK_UNMAPPED_BASE ALIGN(TASK_SIZE / 3, SZ_16M) /* diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S index 77f54830554c..7548d38599ae 100644 --- a/arch/arm/kernel/entry-armv.S +++ b/arch/arm/kernel/entry-armv.S @@ -180,7 +180,7 @@ ENDPROC(__und_invalid) get_thread_info tsk ldr r0, [tsk, #TI_ADDR_LIMIT] - mov r1, #TASK_SIZE + ldr r1, =TASK_SIZE str r1, [tsk, #TI_ADDR_LIMIT] str r0, [sp, #SVC_ADDR_LIMIT] @@ -434,7 +434,8 @@ ENDPROC(__fiq_abt) @ if it was interrupted in a critical region. Here we @ perform a quick test inline since it should be false @ 99.9999% of the time. The rest is done out of line. 
- cmp r4, #TASK_SIZE + ldr r0, =TASK_SIZE + cmp r4, r0 blhs kuser_cmpxchg64_fixup #endif #endif diff --git a/arch/arm/kernel/entry-common.S b/arch/arm/kernel/entry-common.S index 271cb8a1eba1..fee279e28a72 100644 --- a/arch/arm/kernel/entry-common.S +++ b/arch/arm/kernel/entry-common.S @@ -50,7 +50,8 @@ __ret_fast_syscall: UNWIND(.cantunwind ) disable_irq_notrace @ disable interrupts ldr r2, [tsk, #TI_ADDR_LIMIT] - cmp r2, #TASK_SIZE + ldr r1, =TASK_SIZE + cmp r2, r1 blne addr_limit_check_failed ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK @@ -87,7 +88,8 @@ __ret_fast_syscall: #endif disable_irq_notrace @ disable interrupts ldr r2, [tsk, #TI_ADDR_LIMIT] - cmp r2, #TASK_SIZE + ldr r1, =TASK_SIZE + cmp r2, r1 blne addr_limit_check_failed ldr r1, [tsk, #TI_FLAGS] @ re-check for syscall tracing tst r1, #_TIF_SYSCALL_WORK | _TIF_WORK_MASK @@ -128,7 +130,8 @@ ret_slow_syscall: disable_irq_notrace @ disable interrupts ENTRY(ret_to_user_from_irq) ldr r2, [tsk, #TI_ADDR_LIMIT] - cmp r2, #TASK_SIZE + ldr r1, =TASK_SIZE + cmp r2, r1 blne addr_limit_check_failed ldr r1, [tsk, #TI_FLAGS] tst r1, #_TIF_WORK_MASK diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 69a337df619f..0cc53cc903ec 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -1246,9 +1246,14 @@ static inline void prepare_page_table(void) /* * Clear out all the mappings below the kernel image. */ - for (addr = 0; addr < MODULES_VADDR; addr += PMD_SIZE) + for (addr = 0; addr < TASK_SIZE; addr += PMD_SIZE) pmd_clear(pmd_off_k(addr)); +#ifdef CONFIG_KASAN + /*TASK_SIZE ~ MODULES_VADDR is the KASAN's shadow area -- skip over it*/ + addr = MODULES_VADDR; +#endif + #ifdef CONFIG_XIP_KERNEL /* The XIP kernel is mapped in the module area -- skip over it */ addr = ((unsigned long)_exiprom + PMD_SIZE - 1) & PMD_MASK;
From patchwork Sun Apr 12 00:24:06 2020
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 11484263
From: Linus Walleij
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel, Andrey Ryabinin
Cc: Linus Walleij, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 4/5 v2] ARM: Initialize the mapping of KASan shadow memory
Date: Sun, 12 Apr 2020 02:24:06 +0200
Message-Id: <20200412002407.396790-5-linus.walleij@linaro.org>
In-Reply-To: <20200412002407.396790-1-linus.walleij@linaro.org>
References: <20200412002407.396790-1-linus.walleij@linaro.org>

From: Andrey Ryabinin

This patch initializes the KASan shadow region's page table and
memory. KASan is initialized in two stages, and the patch also
includes two related changes:

1. At the early boot stage the whole shadow region is mapped to just
   one physical page (kasan_zero_page). This is done by the function
   kasan_early_init(), which is called by __mmap_switched()
   (arch/arm/kernel/head-common.S)
   ---Andrey Ryabinin

2. After the call to paging_init(), we use kasan_zero_page as the zero
   shadow for memory that KASan does not need to track, and we
   allocate new shadow space for the other memory that KASan needs to
   track. This is done by the function kasan_init(), which is called
   by setup_arch().
   ---Andrey Ryabinin

3. Add support for ARM LPAE: if LPAE is enabled, the KASan shadow
   region's mapping table needs to be copied in the pgd_alloc()
   function.
   ---Abbott Liu

4. Change kasan_pte_populate(), kasan_pmd_populate(),
   kasan_pud_populate() and kasan_pgd_populate() from the
   .meminit.text section to the .init.text section.
   ---Reported by: Florian Fainelli
   ---Signed off by: Abbott Liu

Cc: Andrey Ryabinin
Co-Developed-by: Abbott Liu
Reported-by: Russell King - ARM Linux
Reported-by: Florian Fainelli
Signed-off-by: Abbott Liu
Signed-off-by: Florian Fainelli
Signed-off-by: Linus Walleij
---
ChangeLog v1->v2:
- Use an SPDX identifier for the license.
- Move the TTBR0 accessor calls into this patch.
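The two-stage setup described above can be pictured with a toy
user-space model. This is an illustration only and has no relation to
real ARM page tables: a fixed-size table of "shadow slots" first all
points at one shared zero page (stage 1), and selected slots are later
given their own backing pages (stage 2). All names here are invented
for the sketch.

    #include <stdio.h>
    #include <stdlib.h>

    #define SLOTS 8     /* pretend page-table entries covering the shadow */
    #define PAGE  16    /* toy page size */

    static unsigned char zero_page[PAGE];        /* plays kasan_zero_page */
    static unsigned char *shadow_table[SLOTS];   /* toy shadow mapping */

    /* Stage 1: every slot shares the single read-only zero page. */
    static void early_init(void)
    {
            for (int i = 0; i < SLOTS; i++)
                    shadow_table[i] = zero_page;
    }

    /* Stage 2: give one slot real, writable shadow memory. */
    static int populate(int slot)
    {
            shadow_table[slot] = calloc(1, PAGE);
            return shadow_table[slot] ? 0 : -1;
    }

    int main(void)
    {
            early_init();
            populate(3);    /* only memory KASan tracks gets real shadow */
            printf("slot 0 shares zero page: %d, slot 3 shares zero page: %d\n",
                   shadow_table[0] == zero_page, shadow_table[3] == zero_page);
            return 0;
    }

In the real patch the same idea is carried out with actual page tables:
kasan_early_init() points the whole shadow range at the zero page, and
kasan_init() later allocates and maps real shadow pages for lowmem, the
module area and the pkmap area.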
--- arch/arm/include/asm/kasan.h | 32 +++ arch/arm/include/asm/pgalloc.h | 9 +- arch/arm/include/asm/thread_info.h | 4 + arch/arm/kernel/head-common.S | 3 + arch/arm/kernel/setup.c | 2 + arch/arm/mm/Makefile | 3 + arch/arm/mm/kasan_init.c | 324 +++++++++++++++++++++++++++++ arch/arm/mm/pgd.c | 14 ++ 8 files changed, 389 insertions(+), 2 deletions(-) create mode 100644 arch/arm/include/asm/kasan.h create mode 100644 arch/arm/mm/kasan_init.c diff --git a/arch/arm/include/asm/kasan.h b/arch/arm/include/asm/kasan.h new file mode 100644 index 000000000000..56b954db160e --- /dev/null +++ b/arch/arm/include/asm/kasan.h @@ -0,0 +1,32 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * arch/arm/include/asm/kasan.h + * + * Copyright (c) 2015 Samsung Electronics Co., Ltd. + * Author: Andrey Ryabinin + * + */ + +#ifndef __ASM_KASAN_H +#define __ASM_KASAN_H + +#ifdef CONFIG_KASAN + +#include + +#define KASAN_SHADOW_SCALE_SHIFT 3 + +/* + * The compiler uses a shadow offset assuming that addresses start + * from 0. Kernel addresses don't start from 0, so shadow + * for kernel really starts from 'compiler's shadow offset' + + * ('kernel address space start' >> KASAN_SHADOW_SCALE_SHIFT) + */ + +extern void kasan_init(void); + +#else +static inline void kasan_init(void) { } +#endif + +#endif diff --git a/arch/arm/include/asm/pgalloc.h b/arch/arm/include/asm/pgalloc.h index 069da393110c..d969f8058b26 100644 --- a/arch/arm/include/asm/pgalloc.h +++ b/arch/arm/include/asm/pgalloc.h @@ -21,6 +21,7 @@ #define _PAGE_KERNEL_TABLE (PMD_TYPE_TABLE | PMD_BIT4 | PMD_DOMAIN(DOMAIN_KERNEL)) #ifdef CONFIG_ARM_LPAE +#define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t)) static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr) { @@ -39,14 +40,18 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd) } #else /* !CONFIG_ARM_LPAE */ +#define PGD_SIZE (PAGE_SIZE << 2) /* * Since we have only two-level page tables, these are trivial */ #define pmd_alloc_one(mm,addr) ({ BUG(); ((pmd_t *)2); }) #define pmd_free(mm, pmd) do { } while (0) -#define pud_populate(mm,pmd,pte) BUG() - +#ifndef CONFIG_KASAN +#define pud_populate(mm, pmd, pte) BUG() +#else +#define pud_populate(mm, pmd, pte) do { } while (0) +#endif #endif /* CONFIG_ARM_LPAE */ extern pgd_t *pgd_alloc(struct mm_struct *mm); diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h index 3609a6980c34..cf47cf9c4742 100644 --- a/arch/arm/include/asm/thread_info.h +++ b/arch/arm/include/asm/thread_info.h @@ -13,7 +13,11 @@ #include #include +#ifdef CONFIG_KASAN +#define THREAD_SIZE_ORDER 2 +#else #define THREAD_SIZE_ORDER 1 +#endif #define THREAD_SIZE (PAGE_SIZE << THREAD_SIZE_ORDER) #define THREAD_START_SP (THREAD_SIZE - 8) diff --git a/arch/arm/kernel/head-common.S b/arch/arm/kernel/head-common.S index 6840c7c60a85..89c80154b9ef 100644 --- a/arch/arm/kernel/head-common.S +++ b/arch/arm/kernel/head-common.S @@ -111,6 +111,9 @@ __mmap_switched: str r8, [r2] @ Save atags pointer cmp r3, #0 strne r10, [r3] @ Save control register values +#ifdef CONFIG_KASAN + bl kasan_early_init +#endif mov lr, #0 b start_kernel ENDPROC(__mmap_switched) diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c index d8e18cdd96d3..b0820847bb92 100644 --- a/arch/arm/kernel/setup.c +++ b/arch/arm/kernel/setup.c @@ -58,6 +58,7 @@ #include #include #include +#include #include "atags.h" @@ -1130,6 +1131,7 @@ void __init setup_arch(char **cmdline_p) early_ioremap_reset(); paging_init(mdesc); + kasan_init(); 
request_standard_resources(mdesc); if (mdesc->restart) diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile index 432302911d6e..1c937135c9c4 100644 --- a/arch/arm/mm/Makefile +++ b/arch/arm/mm/Makefile @@ -112,3 +112,6 @@ obj-$(CONFIG_CACHE_L2X0_PMU) += cache-l2x0-pmu.o obj-$(CONFIG_CACHE_XSC3L2) += cache-xsc3l2.o obj-$(CONFIG_CACHE_TAUROS2) += cache-tauros2.o obj-$(CONFIG_CACHE_UNIPHIER) += cache-uniphier.o + +KASAN_SANITIZE_kasan_init.o := n +obj-$(CONFIG_KASAN) += kasan_init.o diff --git a/arch/arm/mm/kasan_init.c b/arch/arm/mm/kasan_init.c new file mode 100644 index 000000000000..b4a18863c29f --- /dev/null +++ b/arch/arm/mm/kasan_init.c @@ -0,0 +1,324 @@ +// SPDX-License-Identifier: GPL-2.0-only +/* + * This file contains kasan initialization code for ARM. + * + * Copyright (c) 2018 Samsung Electronics Co., Ltd. + * Author: Andrey Ryabinin + */ + +#define pr_fmt(fmt) "kasan: " fmt +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "mm.h" + +static pgd_t tmp_pgd_table[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE); + +pmd_t tmp_pmd_table[PTRS_PER_PMD] __page_aligned_bss; + +static __init void *kasan_alloc_block(size_t size, int node) +{ + return memblock_alloc_try_nid(size, size, __pa(MAX_DMA_ADDRESS), + MEMBLOCK_ALLOC_KASAN, node); +} + +static void __init kasan_early_pmd_populate(unsigned long start, + unsigned long end, pud_t *pud) +{ + unsigned long addr; + unsigned long next; + pmd_t *pmd; + + pmd = pmd_offset(pud, start); + for (addr = start; addr < end;) { + pmd_populate_kernel(&init_mm, pmd, kasan_early_shadow_pte); + next = pmd_addr_end(addr, end); + addr = next; + flush_pmd_entry(pmd); + pmd++; + } +} + +static void __init kasan_early_pud_populate(unsigned long start, + unsigned long end, pgd_t *pgd) +{ + unsigned long addr; + unsigned long next; + pud_t *pud; + + pud = pud_offset(pgd, start); + for (addr = start; addr < end;) { + next = pud_addr_end(addr, end); + kasan_early_pmd_populate(addr, next, pud); + addr = next; + pud++; + } +} + +void __init kasan_map_early_shadow(pgd_t *pgdp) +{ + int i; + unsigned long start = KASAN_SHADOW_START; + unsigned long end = KASAN_SHADOW_END; + unsigned long addr; + unsigned long next; + pgd_t *pgd; + + for (i = 0; i < PTRS_PER_PTE; i++) + set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE, + &kasan_early_shadow_pte[i], pfn_pte( + virt_to_pfn(kasan_early_shadow_page), + __pgprot(_L_PTE_DEFAULT | L_PTE_DIRTY + | L_PTE_XN))); + + pgd = pgd_offset_k(start); + for (addr = start; addr < end;) { + next = pgd_addr_end(addr, end); + kasan_early_pud_populate(addr, next, pgd); + addr = next; + pgd++; + } +} + +extern struct proc_info_list *lookup_processor_type(unsigned int); + +void __init kasan_early_init(void) +{ + struct proc_info_list *list; + + /* + * locate processor in the list of supported processor + * types. 
+	 * entries in arch/arm/mm/proc-*.S
+	 */
+	list = lookup_processor_type(read_cpuid_id());
+	if (list) {
+#ifdef MULTI_CPU
+		processor = *list->proc;
+#endif
+	}
+
+	BUILD_BUG_ON((KASAN_SHADOW_END - (1UL << 29)) != KASAN_SHADOW_OFFSET);
+	kasan_map_early_shadow(swapper_pg_dir);
+}
+
+static void __init clear_pgds(unsigned long start,
+			      unsigned long end)
+{
+	for (; start && start < end; start += PMD_SIZE)
+		pmd_clear(pmd_off_k(start));
+}
+
+pte_t * __init kasan_pte_populate(pmd_t *pmd, unsigned long addr, int node)
+{
+	pte_t *pte = pte_offset_kernel(pmd, addr);
+
+	if (pte_none(*pte)) {
+		pte_t entry;
+		void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+		if (!p)
+			return NULL;
+		entry = pfn_pte(virt_to_pfn(p),
+				__pgprot(pgprot_val(PAGE_KERNEL)));
+		set_pte_at(&init_mm, addr, pte, entry);
+	}
+	return pte;
+}
+
+pmd_t * __init kasan_pmd_populate(pud_t *pud, unsigned long addr, int node)
+{
+	pmd_t *pmd = pmd_offset(pud, addr);
+
+	if (pmd_none(*pmd)) {
+		void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+		if (!p)
+			return NULL;
+		pmd_populate_kernel(&init_mm, pmd, p);
+	}
+	return pmd;
+}
+
+pud_t * __init kasan_pud_populate(pgd_t *pgd, unsigned long addr, int node)
+{
+	pud_t *pud = pud_offset(pgd, addr);
+
+	if (pud_none(*pud)) {
+		void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+		if (!p)
+			return NULL;
+		pr_err("populating pud addr %lx\n", addr);
+		pud_populate(&init_mm, pud, p);
+	}
+	return pud;
+}
+
+pgd_t * __init kasan_pgd_populate(unsigned long addr, int node)
+{
+	pgd_t *pgd = pgd_offset_k(addr);
+
+	if (pgd_none(*pgd)) {
+		void *p = kasan_alloc_block(PAGE_SIZE, node);
+
+		if (!p)
+			return NULL;
+		pgd_populate(&init_mm, pgd, p);
+	}
+	return pgd;
+}
+
+static int __init create_mapping(unsigned long start, unsigned long end,
+				 int node)
+{
+	unsigned long addr = start;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+
+	pr_info("populating shadow for %lx, %lx\n", start, end);
+
+	for (; addr < end; addr += PAGE_SIZE) {
+		pgd = kasan_pgd_populate(addr, node);
+		if (!pgd)
+			return -ENOMEM;
+
+		pud = kasan_pud_populate(pgd, addr, node);
+		if (!pud)
+			return -ENOMEM;
+
+		pmd = kasan_pmd_populate(pud, addr, node);
+		if (!pmd)
+			return -ENOMEM;
+
+		pte = kasan_pte_populate(pmd, addr, node);
+		if (!pte)
+			return -ENOMEM;
+	}
+	return 0;
+}
+
+/*
+ * TTBR0 accessors
+ *
+ * FIXME: this TTBR0 direct-manipulation does not seem to be the
+ * right solution at all: we need something that works on ALL ARM
+ * CPUs, not just recent ones with TTBR0/1.
+ */
+#define TTBR0_32	__ACCESS_CP15(c2, 0, c0, 0)
+#define TTBR0_64	__ACCESS_CP15_64(0, c2)
+
+static inline void set_ttbr0(u64 val)
+{
+	if (IS_ENABLED(CONFIG_ARM_LPAE))
+		write_sysreg(val, TTBR0_64);
+	else
+		write_sysreg(val, TTBR0_32);
+}
+
+static inline u64 get_ttbr0(void)
+{
+	if (IS_ENABLED(CONFIG_ARM_LPAE))
+		return read_sysreg(TTBR0_64);
+	else
+		return read_sysreg(TTBR0_32);
+}
+
+void __init kasan_init(void)
+{
+	struct memblock_region *reg;
+	u64 orig_ttbr0;
+	int i;
+
+	/*
+	 * We are going to perform proper setup of shadow memory.
+	 * First we should unmap the early shadow (see the clear_pgds()
+	 * call below). However, instrumented code cannot execute without
+	 * shadow memory, so tmp_pgd_table and tmp_pmd_table are used to
+	 * keep the early shadow mapped until the full shadow setup is
+	 * finished.
+	 */
+	orig_ttbr0 = get_ttbr0();
+
+#ifdef CONFIG_ARM_LPAE
+	memcpy(tmp_pmd_table,
+	       pgd_page_vaddr(*pgd_offset_k(KASAN_SHADOW_START)),
+	       sizeof(tmp_pmd_table));
+	memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+	set_pgd(&tmp_pgd_table[pgd_index(KASAN_SHADOW_START)],
+		__pgd(__pa(tmp_pmd_table) | PMD_TYPE_TABLE | L_PGD_SWAPPER));
+	set_ttbr0(__pa(tmp_pgd_table));
+#else
+	memcpy(tmp_pgd_table, swapper_pg_dir, sizeof(tmp_pgd_table));
+	set_ttbr0((u64)__pa(tmp_pgd_table));
+#endif
+	flush_cache_all();
+	local_flush_bp_all();
+	local_flush_tlb_all();
+
+	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
+
+	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)VMALLOC_START),
+				    kasan_mem_to_shadow((void *)-1UL) + 1);
+
+	for_each_memblock(memory, reg) {
+		void *start = __va(reg->base);
+		void *end = __va(reg->base + reg->size);
+
+		if (reg->base + reg->size > arm_lowmem_limit)
+			end = __va(arm_lowmem_limit);
+		if (start >= end)
+			break;
+
+		create_mapping((unsigned long)kasan_mem_to_shadow(start),
+			       (unsigned long)kasan_mem_to_shadow(end),
+			       NUMA_NO_NODE);
+	}
+
+	/*
+	 * 1. The module global variables are in MODULES_VADDR ~ MODULES_END,
+	 *    so we need to map this area.
+	 * 2. PKMAP_BASE ~ PKMAP_BASE+PMD_SIZE's shadow and MODULES_VADDR
+	 *    ~ MODULES_END's shadow is in the same PMD_SIZE, so we can't
+	 *    use kasan_populate_zero_shadow.
+	 */
+	create_mapping(
+		(unsigned long)kasan_mem_to_shadow((void *)MODULES_VADDR),
+
+		(unsigned long)kasan_mem_to_shadow((void *)(PKMAP_BASE +
+							    PMD_SIZE)),
+		NUMA_NO_NODE);
+
+	/*
+	 * KAsan may reuse the contents of kasan_early_shadow_pte directly, so
+	 * we should make sure that it maps the zero page read-only.
+	 */
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte_at(&init_mm, KASAN_SHADOW_START + i*PAGE_SIZE,
+			   &kasan_early_shadow_pte[i],
+			   pfn_pte(virt_to_pfn(kasan_early_shadow_page),
+				   __pgprot(pgprot_val(PAGE_KERNEL)
+					    | L_PTE_RDONLY)));
+	memset(kasan_early_shadow_page, 0, PAGE_SIZE);
+	set_ttbr0(orig_ttbr0);
+	flush_cache_all();
+	local_flush_bp_all();
+	local_flush_tlb_all();
+	pr_info("Kernel address sanitizer initialized\n");
+	init_task.kasan_depth = 0;
+}
diff --git a/arch/arm/mm/pgd.c b/arch/arm/mm/pgd.c
index 478bd2c6aa50..92a408262df2 100644
--- a/arch/arm/mm/pgd.c
+++ b/arch/arm/mm/pgd.c
@@ -61,6 +61,20 @@ pgd_t *pgd_alloc(struct mm_struct *mm)
 	new_pmd = pmd_alloc(mm, new_pud, 0);
 	if (!new_pmd)
 		goto no_pmd;
+#ifdef CONFIG_KASAN
+	/*
+	 * Copy PMD table for KASAN shadow mappings.
+	 */
+	init_pgd = pgd_offset_k(TASK_SIZE);
+	init_pud = pud_offset(init_pgd, TASK_SIZE);
+	init_pmd = pmd_offset(init_pud, TASK_SIZE);
+	new_pmd = pmd_offset(new_pud, TASK_SIZE);
+	memcpy(new_pmd, init_pmd,
+	       (pmd_index(MODULES_VADDR) - pmd_index(TASK_SIZE))
+	       * sizeof(pmd_t));
+	clean_dcache_area(new_pmd, PTRS_PER_PMD * sizeof(pmd_t));
+#endif
+
 #endif
 
 	if (!vectors_high()) {

From patchwork Sun Apr 12 00:24:07 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Linus Walleij
X-Patchwork-Id: 11484265
From: Linus Walleij
To: Florian Fainelli, Abbott Liu, Russell King, Ard Biesheuvel,
	Andrey Ryabinin
Cc: Dmitry Vyukov, Linus Walleij, Andrey Ryabinin,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 5/5 v2] ARM: Enable KASan for ARM
Date: Sun, 12 Apr 2020 02:24:07 +0200
Message-Id: <20200412002407.396790-6-linus.walleij@linaro.org>
In-Reply-To: <20200412002407.396790-1-linus.walleij@linaro.org>
References: <20200412002407.396790-1-linus.walleij@linaro.org>

From: Andrey Ryabinin

This patch enables the kernel address sanitizer for ARM. XIP_KERNEL
has not been tested and is therefore not allowed for now.

Acked-by: Dmitry Vyukov
Signed-off-by: Abbott Liu
Signed-off-by: Florian Fainelli
Signed-off-by: Linus Walleij
---
ChangeLog v1->v2:
- Moved the hacks to __ADDRESS_SANITIZE__ to the patch replacing
  the memory access functions.
- Moved the definition of KASAN_OFFSET out of this patch and to
  the patch that defines the virtual memory used by KASan.
---
 Documentation/dev-tools/kasan.rst | 4 ++--
 arch/arm/Kconfig                  | 1 +
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/dev-tools/kasan.rst b/Documentation/dev-tools/kasan.rst
index c652d740735d..0962365e1405 100644
--- a/Documentation/dev-tools/kasan.rst
+++ b/Documentation/dev-tools/kasan.rst
@@ -21,8 +21,8 @@ global variables yet.
 Tag-based KASAN is only supported in Clang and requires version 7.0.0 or later.
 
-Currently generic KASAN is supported for the x86_64, arm64, xtensa, s390 and
-riscv architectures, and tag-based KASAN is supported only for arm64.
+Currently generic KASAN is supported for the x86_64, arm, arm64, xtensa, s390
+and riscv architectures, and tag-based KASAN is supported only for arm64.
 
 Usage
 -----
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index f6f2d3b202f5..f5d26cbe2f42 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -64,6 +64,7 @@ config ARM
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
+	select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
 	select HAVE_ARCH_SECCOMP_FILTER if AEABI && !OABI_COMPAT
 	select HAVE_ARCH_THREAD_STRUCT_WHITELIST
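
[Editor's note] To make the shadow-memory arithmetic in the kasan_init patch above
easier to follow, here is a minimal userspace sketch of the generic KASAN mapping
implied by KASAN_SHADOW_SCALE_SHIFT and kasan_mem_to_shadow(): one shadow byte
tracks eight bytes of kernel address space, and the shadow address is the original
address shifted right by three plus a constant offset. The offset and the sample
address below are illustrative assumptions only; the real ARM constants are set by
the earlier patch in this series that lays out the KASan virtual memory, not here.

/*
 * Illustrative sketch only: mimics the generic KASAN formula
 *   shadow = (addr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
 * EXAMPLE_SHADOW_OFFSET and the sample address are made-up values,
 * not the constants used by the actual ARM patches.
 */
#include <stdio.h>
#include <stdint.h>

#define KASAN_SHADOW_SCALE_SHIFT	3	/* 1 shadow byte covers 8 bytes */
#define EXAMPLE_SHADOW_OFFSET		0x1f000000UL	/* assumed, for illustration */

static uintptr_t example_mem_to_shadow(uintptr_t addr)
{
	return (addr >> KASAN_SHADOW_SCALE_SHIFT) + EXAMPLE_SHADOW_OFFSET;
}

int main(void)
{
	uintptr_t addr = 0xc0008000UL;	/* a typical ARM kernel text address */

	printf("addr 0x%08lx -> shadow byte at 0x%08lx\n",
	       (unsigned long)addr,
	       (unsigned long)example_mem_to_shadow(addr));
	return 0;
}

Compiling this with any C compiler and varying the sample address shows how every
8-byte granule of the mapped lowmem regions walked by create_mapping() ends up
with exactly one shadow byte.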