From patchwork Thu Jun 30 07:47:56 2022
X-Patchwork-Submitter: David Gow
X-Patchwork-Id: 12901314
Date: Thu, 30 Jun 2022 15:47:56 +0800
Message-Id: <20220630074757.2739000-1-davidgow@google.com>
Subject: [PATCH v3 1/2] mm: Add PAGE_ALIGN_DOWN macro
From: David Gow
To: Vincent Whitchurch, Johannes Berg, Patricia Alfonso, Jeff Dike,
    Richard Weinberger, anton.ivanov@cambridgegreys.com, Dmitry Vyukov,
    Brendan Higgins, Andrew Morton, Andrey Konovalov, Andrey Ryabinin
Cc: David Gow, kasan-dev, linux-um@lists.infradead.org, LKML,
    Daniel Latypov, linux-mm@kvack.org, kunit-dev@googlegroups.com

This is just the same as PAGE_ALIGN(), but rounds the address down, not
up.

Suggested-by: Dmitry Vyukov
Signed-off-by: David Gow
Acked-by: Andrew Morton
---
Please take this patch as part of the UML tree, along with patch #2,
thanks!

Changes since v2:
https://lore.kernel.org/lkml/20220527185600.1236769-1-davidgow@google.com/
- Add Andrew's Acked-by tag.

v2 was the first version of this patch (it having been introduced as
part of v2 of the UML/KASAN series).

There are almost certainly lots of places where this macro should be
used: just look for ALIGN_DOWN(..., PAGE_SIZE). I haven't gone through
to try to replace them all.
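For illustration, a tiny userspace mock of the difference between the
two macros (a sketch only, not the kernel's actual headers; it assumes
PAGE_SIZE == 0x1000 and the simplified power-of-two ALIGN()/ALIGN_DOWN()
definitions below):

#include <stdio.h>

#define PAGE_SIZE             0x1000UL
#define ALIGN(x, a)           (((x) + (a) - 1) & ~((a) - 1))
#define ALIGN_DOWN(x, a)      ((x) & ~((a) - 1))
#define PAGE_ALIGN(addr)      ALIGN(addr, PAGE_SIZE)
#define PAGE_ALIGN_DOWN(addr) ALIGN_DOWN(addr, PAGE_SIZE)

int main(void)
{
        unsigned long addr = 0x12345;

        /* PAGE_ALIGN rounds up to the next page boundary... */
        printf("up:   %#lx\n", PAGE_ALIGN(addr));      /* 0x13000 */
        /* ...PAGE_ALIGN_DOWN rounds down to the previous one. */
        printf("down: %#lx\n", PAGE_ALIGN_DOWN(addr)); /* 0x12000 */
        return 0;
}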
---
 include/linux/mm.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9f44254af8ce..9abe5975ad11 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -221,6 +221,9 @@ int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)
 
+/* to align the pointer to the (prev) page boundary */
+#define PAGE_ALIGN_DOWN(addr) ALIGN_DOWN(addr, PAGE_SIZE)
+
 /* test whether an address (unsigned long or pointer) is aligned to PAGE_SIZE */
 #define PAGE_ALIGNED(addr) IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)

From patchwork Thu Jun 30 07:47:57 2022
X-Patchwork-Submitter: David Gow
X-Patchwork-Id: 12901315
Date: Thu, 30 Jun 2022 15:47:57 +0800
In-Reply-To: <20220630074757.2739000-1-davidgow@google.com>
Message-Id: <20220630074757.2739000-2-davidgow@google.com>
References: <20220630074757.2739000-1-davidgow@google.com>
Subject: [PATCH v3 2/2] UML: add support for KASAN under x86_64
From: David Gow
To: Vincent Whitchurch, Johannes Berg, Patricia Alfonso, Jeff Dike,
    Richard Weinberger, anton.ivanov@cambridgegreys.com, Dmitry Vyukov,
    Brendan Higgins, Andrew Morton, Andrey Konovalov, Andrey Ryabinin
Cc: kasan-dev, linux-um@lists.infradead.org, LKML, Daniel Latypov,
    linux-mm@kvack.org, kunit-dev@googlegroups.com, David Gow

From: Patricia Alfonso

Make KASAN run on User Mode Linux on x86_64.

The UML-specific KASAN initializer uses mmap to map the ~16TB of shadow
memory to the location defined by KASAN_SHADOW_OFFSET. kasan_init()
utilizes constructors to initialize KASAN before main().

The location of the KASAN shadow memory, starting at
KASAN_SHADOW_OFFSET, can be configured using the KASAN_SHADOW_OFFSET
option.
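(For readers unfamiliar with the shadow layout: generic KASAN finds the
shadow byte for an address by dividing the address by 8 and adding
KASAN_SHADOW_OFFSET. The sketch below is illustrative only and is not
part of the patch; the helper name is made up, and the offset shown is
the default from the new Kconfig option.)

#define KASAN_SHADOW_SCALE_SHIFT 3
#define KASAN_SHADOW_OFFSET      0x100000000000UL /* default from Kconfig */

/* Each shadow byte tracks an 8-byte granule of real memory. */
static inline void *kasan_shadow_of(const void *addr)
{
        return (void *)(((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
                        + KASAN_SHADOW_OFFSET);
}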
The default location of this offset is 0x100000000000, which keeps it
out-of-the-way even on UML setups with more "physical" memory. For
low-memory setups, 0x7fff8000 can be used instead, which fits in an
immediate and is therefore faster, as suggested by Dmitry Vyukov. There
is usually enough free space at this location; however, it is a config
option so that it can be easily changed if needed.

Note that, unlike KASAN on other architectures, vmalloc allocations
still use the shadow memory allocated upfront, rather than allocating
and freeing it per vmalloc allocation.

If another architecture chooses to go down the same path, we should
replace the checks for CONFIG_UML with something more generic, such as:
- A CONFIG_KASAN_NO_SHADOW_ALLOC option, which architectures could set
- or, a way of having architecture-specific versions of these vmalloc
  and module shadow memory allocation options.

Also note that, while UML supports both KASAN in inline mode
(CONFIG_KASAN_INLINE) and static linking (CONFIG_STATIC_LINK), it does
not support both at the same time.

Signed-off-by: Patricia Alfonso
Co-developed-by: Vincent Whitchurch
Signed-off-by: Vincent Whitchurch
Signed-off-by: David Gow
Reviewed-by: Johannes Berg
---
This is v3 of the KASAN/UML port. It should be ready to go.

Note that this will fail to build if UML is linked statically, due to:
https://lore.kernel.org/all/20220526185402.955870-1-davidgow@google.com/

Changes since v2:
https://lore.kernel.org/lkml/20220527185600.1236769-2-davidgow@google.com/
- Don't define CONFIG_KASAN in USER_CFLAGS, given we don't use it.
  (Thanks Johannes)
- Update patch descriptions and comments given we allocate shadow memory
  based on the size of the virtual address space, not the "physical"
  memory used by UML.
  - This was changed between the original RFC and v1, with
    KASAN_SHADOW_SIZE's definition being updated.
  - References to UML using 18TB of space and the shadow memory taking
    2.25TB were updated. (Thanks Johannes)
  - A mention of physical memory in a comment was updated. (Thanks Andrey)
- Move some discussion of how the vmalloc() handling could be made more
  generic from a comment to the commit description. (Thanks Andrey)

Changes since RFC v3:
https://lore.kernel.org/all/20220526010111.755166-1-davidgow@google.com/
- No longer print "KernelAddressSanitizer initialized" (Johannes)
- Document the reason for the CONFIG_UML checks in shadow.c (Dmitry)
- Support static builds via kasan_arch_is_ready() (Dmitry)
- Get rid of a redundant call to kasan_mem_to_shadow() (Dmitry)
- Use PAGE_ALIGN and the new PAGE_ALIGN_DOWN macros (Dmitry)
- Reinstate missing arch/um/include/asm/kasan.h file (Johannes)

Changes since v1:
https://lore.kernel.org/all/20200226004608.8128-1-trishalfonso@google.com/
- Include several fixes from Vincent Whitchurch:
  https://lore.kernel.org/all/20220525111756.GA15955@axis.com/
  - Support for KASAN_VMALLOC, by changing the way
    kasan_{populate,release}_vmalloc work to update existing shadow
    memory, rather than allocating anything new.
  - A similar fix for modules' shadow memory.
  - Support for KASAN_STACK
    - This requires the bugfix here:
      https://lore.kernel.org/lkml/20220523140403.2361040-1-vincent.whitchurch@axis.com/
    - Plus a couple of files excluded from KASAN.
- Revert the default shadow offset to 0x100000000000
  - This was breaking when mem=1G for me, at least.
- A few minor fixes to linker sections and scripts.
  - I've added one to dyn.lds.S on top of the ones Vincent added.
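(Background for the .kasan_init/.init_array linker-script changes below:
the port relies on ordinary ELF constructors, i.e. function pointers in
.init_array that the C runtime calls before main(). A stand-alone
userspace sketch of the same idea, purely hypothetical and not kernel
code:)

#include <stdio.h>

/*
 * The patch does this manually: it stores a pointer to kasan_init() in
 * a dedicated .kasan_init section, and the linker scripts pull that
 * section in at the head of .init_array so it runs before all other
 * constructors.
 */
__attribute__((constructor))
static void early_init(void)
{
        puts("constructor: runs before main()");
}

int main(void)
{
        puts("main()");
        return 0;
}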
---
 arch/um/Kconfig                  | 15 +++++++++++++
 arch/um/include/asm/common.lds.S |  2 ++
 arch/um/include/asm/kasan.h      | 37 ++++++++++++++++++++++++++++++++
 arch/um/kernel/Makefile          |  3 +++
 arch/um/kernel/dyn.lds.S         |  6 +++++-
 arch/um/kernel/mem.c             | 19 ++++++++++++++++
 arch/um/os-Linux/mem.c           | 22 +++++++++++++++++++
 arch/um/os-Linux/user_syms.c     |  4 ++--
 arch/x86/um/Makefile             |  3 ++-
 arch/x86/um/vdso/Makefile        |  3 +++
 mm/kasan/shadow.c                | 29 +++++++++++++++++++++++--
 11 files changed, 137 insertions(+), 6 deletions(-)
 create mode 100644 arch/um/include/asm/kasan.h

diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 8062a0c08952..289c9dc226d6 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -12,6 +12,8 @@ config UML
        select ARCH_HAS_STRNLEN_USER
        select ARCH_NO_PREEMPT
        select HAVE_ARCH_AUDITSYSCALL
+       select HAVE_ARCH_KASAN if X86_64
+       select HAVE_ARCH_KASAN_VMALLOC if HAVE_ARCH_KASAN
        select HAVE_ARCH_SECCOMP_FILTER
        select HAVE_ASM_MODVERSIONS
        select HAVE_UID16
@@ -220,6 +222,19 @@ config UML_TIME_TRAVEL_SUPPORT
 
          It is safe to say Y, but you probably don't need this.
 
+config KASAN_SHADOW_OFFSET
+       hex
+       depends on KASAN
+       default 0x100000000000
+       help
+         This is the offset at which the ~16TB of shadow memory is
+         mapped and used by KASAN for memory debugging. This can be any
+         address that has at least KASAN_SHADOW_SIZE (total address space divided
+         by 8) amount of space so that the KASAN shadow memory does not conflict
+         with anything. The default is 0x100000000000, which works even if mem is
+         set to a large value. On low-memory systems, try 0x7fff8000, as it fits
+         into the immediate of most instructions, improving performance.
+
 endmenu
 
 source "arch/um/drivers/Kconfig"

diff --git a/arch/um/include/asm/common.lds.S b/arch/um/include/asm/common.lds.S
index eca6c452a41b..fd481ac371de 100644
--- a/arch/um/include/asm/common.lds.S
+++ b/arch/um/include/asm/common.lds.S
@@ -83,6 +83,8 @@
   }
   .init_array : {
        __init_array_start = .;
+       *(.kasan_init)
+       *(.init_array.*)
        *(.init_array)
        __init_array_end = .;
   }

diff --git a/arch/um/include/asm/kasan.h b/arch/um/include/asm/kasan.h
new file mode 100644
index 000000000000..0d6547f4ec85
--- /dev/null
+++ b/arch/um/include/asm/kasan.h
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_UM_KASAN_H
+#define __ASM_UM_KASAN_H
+
+#include
+#include
+
+#define KASAN_SHADOW_OFFSET _AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+
+/* used in kasan_mem_to_shadow to divide by 8 */
+#define KASAN_SHADOW_SCALE_SHIFT 3
+
+#ifdef CONFIG_X86_64
+#define KASAN_HOST_USER_SPACE_END_ADDR 0x00007fffffffffffUL
+/* KASAN_SHADOW_SIZE is the size of total address space divided by 8 */
+#define KASAN_SHADOW_SIZE ((KASAN_HOST_USER_SPACE_END_ADDR + 1) >> \
+                       KASAN_SHADOW_SCALE_SHIFT)
+#else
+#error "KASAN_SHADOW_SIZE is not defined for this sub-architecture"
+#endif /* CONFIG_X86_64 */
+
+#define KASAN_SHADOW_START (KASAN_SHADOW_OFFSET)
+#define KASAN_SHADOW_END (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
+
+#ifdef CONFIG_KASAN
+void kasan_init(void);
+void kasan_map_memory(void *start, unsigned long len);
+extern int kasan_um_is_ready;
+
+#ifdef CONFIG_STATIC_LINK
+#define kasan_arch_is_ready() (kasan_um_is_ready)
+#endif
+#else
+static inline void kasan_init(void) { }
+#endif /* CONFIG_KASAN */
+
+#endif /* __ASM_UM_KASAN_H */

diff --git a/arch/um/kernel/Makefile b/arch/um/kernel/Makefile
index 1c2d4b29a3d4..a089217e2f0e 100644
--- a/arch/um/kernel/Makefile
+++ b/arch/um/kernel/Makefile
@@ -27,6 +27,9 @@ obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
 obj-$(CONFIG_STACKTRACE) += stacktrace.o
 obj-$(CONFIG_GENERIC_PCI_IOMAP) += ioport.o
 
+KASAN_SANITIZE_stacktrace.o := n
+KASAN_SANITIZE_sysrq.o := n
+
 USER_OBJS := config.o
 
 include arch/um/scripts/Makefile.rules

diff --git a/arch/um/kernel/dyn.lds.S b/arch/um/kernel/dyn.lds.S
index 2f2a8ce92f1e..2b7fc5b54164 100644
--- a/arch/um/kernel/dyn.lds.S
+++ b/arch/um/kernel/dyn.lds.S
@@ -109,7 +109,11 @@ SECTIONS
      be empty, which isn't pretty. */
   . = ALIGN(32 / 8);
   .preinit_array : { *(.preinit_array) }
-  .init_array : { *(.init_array) }
+  .init_array : {
+    *(.kasan_init)
+    *(.init_array.*)
+    *(.init_array)
+  }
   .fini_array : { *(.fini_array) }
   .data : {
     INIT_TASK_DATA(KERNEL_STACK_SIZE)

diff --git a/arch/um/kernel/mem.c b/arch/um/kernel/mem.c
index 15295c3237a0..276a1f0b91f1 100644
--- a/arch/um/kernel/mem.c
+++ b/arch/um/kernel/mem.c
@@ -18,6 +18,25 @@
 #include
 #include
 #include
+#include
+
+#ifdef CONFIG_KASAN
+int kasan_um_is_ready;
+void kasan_init(void)
+{
+       /*
+        * kasan_map_memory will map all of the required address space and
+        * the host machine will allocate physical memory as necessary.
+        */
+       kasan_map_memory((void *)KASAN_SHADOW_START, KASAN_SHADOW_SIZE);
+       init_task.kasan_depth = 0;
+       kasan_um_is_ready = true;
+}
+
+static void (*kasan_init_ptr)(void)
+__section(".kasan_init") __used
+= kasan_init;
+#endif
 
 /* allocated in paging_init, zeroed in mem_init, and unchanged thereafter */
 unsigned long *empty_zero_page = NULL;

diff --git a/arch/um/os-Linux/mem.c b/arch/um/os-Linux/mem.c
index 3c1b77474d2d..8530b2e08604 100644
--- a/arch/um/os-Linux/mem.c
+++ b/arch/um/os-Linux/mem.c
@@ -17,6 +17,28 @@
 #include
 #include
 
+/*
+ * kasan_map_memory - maps memory from @start with a size of @len.
+ * The allocated memory is filled with zeroes upon success.
+ * @start: the start address of the memory to be mapped
+ * @len: the length of the memory to be mapped
+ *
+ * This function is used to map shadow memory for KASAN in uml
+ */
+void kasan_map_memory(void *start, size_t len)
+{
+       if (mmap(start,
+                len,
+                PROT_READ|PROT_WRITE,
+                MAP_FIXED|MAP_ANONYMOUS|MAP_PRIVATE|MAP_NORESERVE,
+                -1,
+                0) == MAP_FAILED) {
+               os_info("Couldn't allocate shadow memory: %s\n.",
+                       strerror(errno));
+               exit(1);
+       }
+}
+
 /* Set by make_tempfile() during early boot. */
 static char *tempdir = NULL;

diff --git a/arch/um/os-Linux/user_syms.c b/arch/um/os-Linux/user_syms.c
index 715594fe5719..cb667c9225ab 100644
--- a/arch/um/os-Linux/user_syms.c
+++ b/arch/um/os-Linux/user_syms.c
@@ -27,10 +27,10 @@ EXPORT_SYMBOL(strstr);
 #ifndef __x86_64__
 extern void *memcpy(void *, const void *, size_t);
 EXPORT_SYMBOL(memcpy);
-#endif
-
 EXPORT_SYMBOL(memmove);
 EXPORT_SYMBOL(memset);
+#endif
+
 EXPORT_SYMBOL(printf);
 
 /* Here, instead, I can provide a fake prototype. Yes, someone cares: genksyms.

diff --git a/arch/x86/um/Makefile b/arch/x86/um/Makefile
index ba5789c35809..f778e37494ba 100644
--- a/arch/x86/um/Makefile
+++ b/arch/x86/um/Makefile
@@ -28,7 +28,8 @@ else
 
 obj-y += syscalls_64.o vdso/
 
-subarch-y = ../lib/csum-partial_64.o ../lib/memcpy_64.o ../entry/thunk_64.o
+subarch-y = ../lib/csum-partial_64.o ../lib/memcpy_64.o ../entry/thunk_64.o \
+       ../lib/memmove_64.o ../lib/memset_64.o
 
 endif

diff --git a/arch/x86/um/vdso/Makefile b/arch/x86/um/vdso/Makefile
index 5943387e3f35..8c0396fd0e6f 100644
--- a/arch/x86/um/vdso/Makefile
+++ b/arch/x86/um/vdso/Makefile
@@ -3,6 +3,9 @@
 # Building vDSO images for x86.
 #
 
+# do not instrument on vdso because KASAN is not compatible with user mode
+KASAN_SANITIZE := n
+
 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
 KCOV_INSTRUMENT := n
 
diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index a4f07de21771..7a7fc76e99a8 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -295,9 +295,22 @@ int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
                return 0;
 
        shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr);
-       shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
        shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size);
-       shadow_end = ALIGN(shadow_end, PAGE_SIZE);
+
+       /*
+        * User Mode Linux maps enough shadow memory for all of virtual memory
+        * at boot, so doesn't need to allocate more on vmalloc, just clear it.
+        *
+        * The remaining CONFIG_UML checks in this file exist for the same
+        * reason.
+        */
+       if (IS_ENABLED(CONFIG_UML)) {
+               __memset((void *)shadow_start, KASAN_VMALLOC_INVALID, shadow_end - shadow_start);
+               return 0;
+       }
+
+       shadow_start = PAGE_ALIGN_DOWN(shadow_start);
+       shadow_end = PAGE_ALIGN(shadow_end);
 
        ret = apply_to_page_range(&init_mm, shadow_start,
                                  shadow_end - shadow_start,
@@ -466,6 +479,10 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
 
        if (shadow_end > shadow_start) {
                size = shadow_end - shadow_start;
+               if (IS_ENABLED(CONFIG_UML)) {
+                       __memset(shadow_start, KASAN_SHADOW_INIT, shadow_end - shadow_start);
+                       return;
+               }
                apply_to_existing_page_range(&init_mm,
                                             (unsigned long)shadow_start,
                                             size, kasan_depopulate_vmalloc_pte,
@@ -531,6 +548,11 @@ int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask)
        if (WARN_ON(!PAGE_ALIGNED(shadow_start)))
                return -EINVAL;
 
+       if (IS_ENABLED(CONFIG_UML)) {
+               __memset((void *)shadow_start, KASAN_SHADOW_INIT, shadow_size);
+               return 0;
+       }
+
        ret = __vmalloc_node_range(shadow_size, 1, shadow_start,
                        shadow_start + shadow_size,
                        GFP_KERNEL,
@@ -554,6 +576,9 @@ int kasan_alloc_module_shadow(void *addr, size_t size, gfp_t gfp_mask)
 
 void kasan_free_module_shadow(const struct vm_struct *vm)
 {
+       if (IS_ENABLED(CONFIG_UML))
+               return;
+
        if (vm->flags & VM_KASAN)
                vfree(kasan_mem_to_shadow(vm->addr));
 }