From patchwork Thu Oct 31 09:39:05 2019
X-Patchwork-Submitter: Daniel Axtens
X-Patchwork-Id: 11220809
From: Daniel Axtens <dja@axtens.net>
To: kasan-dev@googlegroups.com, linux-mm@kvack.org, x86@kernel.org,
	aryabinin@virtuozzo.com, glider@google.com, luto@kernel.org,
	linux-kernel@vger.kernel.org, mark.rutland@arm.com,
	dvyukov@google.com, christophe.leroy@c-s.fr
Cc: linuxppc-dev@lists.ozlabs.org, gor@linux.ibm.com, Daniel Axtens
Subject: [PATCH v11 0/4] kasan: support backing vmalloc space with real shadow memory
Date: Thu, 31 Oct 2019 20:39:05 +1100
Message-Id: <20191031093909.9228-1-dja@axtens.net>

Currently, vmalloc space is backed by the early shadow page. This means
that kasan is incompatible with VMAP_STACK.

This series provides a mechanism to back vmalloc space with real,
dynamically allocated memory. I have only wired up x86, because that's
the only currently supported arch I can work with easily, but it's very
easy to wire up other architectures, and it appears that there is some
work-in-progress code to do this on arm64 and s390.

This has been discussed before in the context of VMAP_STACK:

 - https://bugzilla.kernel.org/show_bug.cgi?id=202009
 - https://lkml.org/lkml/2018/7/22/198
 - https://lkml.org/lkml/2019/7/19/822

In terms of implementation details:

Most mappings in vmalloc space are small, requiring less than a full
page of shadow space. Allocating a full shadow page per mapping would
therefore be wasteful. Furthermore, to ensure that different mappings
use different shadow pages, mappings would have to be aligned to
KASAN_SHADOW_SCALE_SIZE * PAGE_SIZE.

Instead, share backing space across multiple mappings. Allocate a
backing page when a mapping in vmalloc space uses a particular page of
the shadow region. This page can be shared by other vmalloc mappings
later on.

We hook in to the vmap infrastructure to lazily clean up unused shadow
memory.
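To make that concrete, here is a minimal sketch of the allocation path,
assuming the shape described above: a per-shadow-page callback driven by
apply_to_page_range(), with racing allocations resolved under
init_mm.page_table_lock. kasan_populate_vmalloc is the name used by this
series; the helper kasan_populate_vmalloc_pte, the poison constant and
the other details here are illustrative rather than the exact patch
contents.

/*
 * Rough sketch of the allocation path described above.  Illustrative
 * only: signatures, the poison constant and error handling in the real
 * patches may differ.
 */
#include <linux/kasan.h>
#include <linux/mm.h>
#include <linux/pfn.h>

#define KASAN_VMALLOC_INVALID	0xF8	/* illustrative poison value */

/* Back one page of shadow with real memory, unless someone beat us to it. */
static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
				      void *unused)
{
	unsigned long page;
	pte_t pte;

	if (likely(!pte_none(*ptep)))
		return 0;	/* already backed, shared with an earlier mapping */

	page = __get_free_page(GFP_KERNEL);
	if (!page)
		return -ENOMEM;

	/* Poison the new shadow; the alloc path unpoisons what it uses. */
	memset((void *)page, KASAN_VMALLOC_INVALID, PAGE_SIZE);
	pte = pfn_pte(PFN_DOWN(__pa(page)), PAGE_KERNEL);

	spin_lock(&init_mm.page_table_lock);
	if (pte_none(*ptep))
		set_pte_at(&init_mm, addr, ptep, pte);
	else
		free_page(page);	/* lost a race with another mapping; back off */
	spin_unlock(&init_mm.page_table_lock);

	return 0;
}

/* Called when a new area in vmalloc space is set up. */
int kasan_populate_vmalloc(unsigned long addr, unsigned long size)
{
	unsigned long shadow_start, shadow_end;
	int ret;

	shadow_start = (unsigned long)kasan_mem_to_shadow((void *)addr);
	shadow_end = (unsigned long)kasan_mem_to_shadow((void *)addr + size);
	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
	shadow_end = ALIGN(shadow_end, PAGE_SIZE);

	ret = apply_to_page_range(&init_mm, shadow_start,
				  shadow_end - shadow_start,
				  kasan_populate_vmalloc_pte, NULL);
	if (ret)
		return ret;

	flush_cache_vmap(shadow_start, shadow_end);
	return 0;
}

The freeing side is the mirror image: when the vmap book-keeping decides
a range of vmalloc space is no longer used by any mapping, the
corresponding shadow pages are unmapped (with an appropriate TLB flush)
and returned to the page allocator.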
Testing with test_vmalloc.sh on an x86 VM with 2 vCPUs shows that:

 - Turning on KASAN, inline instrumentation, without vmalloc, introduces
   a 4.1x-4.2x slowdown in vmalloc operations.

 - Turning this on introduces the following slowdowns over KASAN:
    * ~1.76x slower single-threaded (test_vmalloc.sh performance)
    * ~2.18x slower when both cpus are performing operations
      simultaneously (test_vmalloc.sh sequential_test_order=1)

This is unfortunate but, given that this is a debug feature only, not
the end of the world. The benchmarks are also a stress-test for the
vmalloc subsystem: they're not indicative of an overall 2x slowdown!

v1: https://lore.kernel.org/linux-mm/20190725055503.19507-1-dja@axtens.net/
v2: https://lore.kernel.org/linux-mm/20190729142108.23343-1-dja@axtens.net/
 Address review comments:
 - Patch 1: use kasan_unpoison_shadow's built-in handling of ranges
   that do not align to a full shadow byte
 - Patch 3: prepopulate pgds rather than faulting things in
v3: https://lore.kernel.org/linux-mm/20190731071550.31814-1-dja@axtens.net/
 Address comments from Mark Rutland:
 - kasan_populate_vmalloc is a better name
 - handle concurrency correctly
 - various nits and cleanups
 - relax module alignment in KASAN_VMALLOC case
v4: https://lore.kernel.org/linux-mm/20190815001636.12235-1-dja@axtens.net/
 Changes to patch 1 only:
 - Integrate Mark's rework, thanks Mark!
 - handle the case where kasan_populate_shadow might fail
 - poison shadow on free, allowing the alloc path to just unpoison
   memory that it uses
v5: https://lore.kernel.org/linux-mm/20190830003821.10737-1-dja@axtens.net/
 Address comments from Christophe Leroy:
 - Fix some issues with my descriptions in commit messages and docs
 - Dynamically free unused shadow pages by hooking into the vmap
   book-keeping
 - Split out the test into a separate patch
 - Optional patch to track the number of pages allocated
 - minor checkpatch cleanups
v6: https://lore.kernel.org/linux-mm/20190902112028.23773-1-dja@axtens.net/
 Properly guard freeing pages in patch 1, drop debugging code.
v7: https://lore.kernel.org/linux-mm/20190903145536.3390-1-dja@axtens.net/
 Add a TLB flush on freeing, thanks Mark Rutland.
 Explain more clearly how I think freeing is concurrency-safe.
v8: https://lore.kernel.org/linux-mm/20191001065834.8880-1-dja@axtens.net/
 rename kasan_vmalloc/shadow_pages to kasan/vmalloc_shadow_pages
v9: https://lore.kernel.org/linux-mm/20191017012506.28503-1-dja@axtens.net/
 (attempt to) address a number of review comments for patch 1.
v10: https://lore.kernel.org/linux-mm/20191029042059.28541-1-dja@axtens.net/
 - rebase on linux-next, pulling in Vlad's new work on splitting the
   vmalloc locks
 - after much discussion of barriers, document where I think they are
   needed and why. Thanks Mark and Andrey.
 - clean up some TLB flushing and checkpatch bits
v11: Address review comments from Andrey and Vlad, drop patch 5, add
 benchmarking results.

Daniel Axtens (4):
  kasan: support backing vmalloc space with real shadow memory
  kasan: add test for vmalloc
  fork: support VMAP_STACK with KASAN_VMALLOC
  x86/kasan: support KASAN_VMALLOC

 Documentation/dev-tools/kasan.rst |  63 ++++++++
 arch/Kconfig                      |   9 +-
 arch/x86/Kconfig                  |   1 +
 arch/x86/mm/kasan_init_64.c       |  61 ++++++++
 include/linux/kasan.h             |  31 ++++
 include/linux/moduleloader.h      |   2 +-
 include/linux/vmalloc.h           |  12 ++
 kernel/fork.c                     |   4 +
 lib/Kconfig.kasan                 |  16 +++
 lib/test_kasan.c                  |  26 ++++
 mm/kasan/common.c                 | 231 ++++++++++++++++++++++++++++++
 mm/kasan/generic_report.c         |   3 +
 mm/kasan/kasan.h                  |   1 +
 mm/vmalloc.c                      |  53 +++++--
 14 files changed, 500 insertions(+), 13 deletions(-)