From patchwork Tue Dec 14 16:20:08 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676323
Date: Tue, 14 Dec 2021 17:20:08 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-2-glider@google.com>
Subject: [PATCH 01/43] arch/x86: add missing include to sparsemem.h
From: Alexander Potapenko
To: glider@google.com
Cc: Alexander Viro, Andrew Morton, Andrey Konovalov, Andy Lutomirski,
 Ard Biesheuvel, Arnd Bergmann, Borislav Petkov, Christoph Hellwig,
 Christoph Lameter, David Rientjes, Dmitry Vyukov, Eric Dumazet,
 Greg Kroah-Hartman, Herbert Xu, Ilya Leoshkevich, Ingo Molnar,
 Jens Axboe, Joonsoo Kim, Kees Cook, Marco Elver, Matthew Wilcox,
 "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek,
 Steven Rostedt, Thomas Gleixner, Vasily Gorbik, Vegard Nossum,
 Vlastimil Babka, linux-mm@kvack.org, linux-arch@vger.kernel.org,
 linux-kernel@vger.kernel.org

From: Dmitry Vyukov

Somehow all existing inclusions of sparsemem.h are preceded by an
inclusion of <linux/types.h>, but KMSAN contains code that transitively
includes sparsemem.h without that header, resulting in a compilation
error:

sparsemem.h:34:32: error: unknown type name 'phys_addr_t'
extern int phys_to_target_node(phys_addr_t start);
                               ^
sparsemem.h:36:39: error: unknown type name 'u64'
extern int memory_add_physaddr_to_nid(u64 start);
                                      ^

Because sparsemem.h does actually use phys_addr_t and u64, include
types.h explicitly.
Signed-off-by: Dmitry Vyukov
---
Link: https://linux-review.googlesource.com/id/Ifae221ce85d870d8f8d17173bd44d5cf9be2950f
---
 arch/x86/include/asm/sparsemem.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h
index 6a9ccc1b2be5d..64df897c0ee30 100644
--- a/arch/x86/include/asm/sparsemem.h
+++ b/arch/x86/include/asm/sparsemem.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_X86_SPARSEMEM_H
 #define _ASM_X86_SPARSEMEM_H
 
+#include <linux/types.h>
+
 #ifdef CONFIG_SPARSEMEM
 /*
  * generic non-linear memory support:

From patchwork Tue Dec 14 16:20:09 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676327
Date: Tue, 14 Dec 2021 17:20:09 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-3-glider@google.com>
Subject: [PATCH 02/43] stackdepot: reserve 5 extra bits in depot_stack_handle_t
From: Alexander Potapenko
To: glider@google.com

Some users (currently only KMSAN) may want to use spare bits in
depot_stack_handle_t. Let them do so by adding @extra_bits to
__stack_depot_save() to store arbitrary flags, and by providing
stack_depot_get_extra_bits() to retrieve those flags.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I0587f6c777667864768daf07821d594bce6d8ff9
---
 include/linux/stackdepot.h |  8 ++++++++
 lib/stackdepot.c           | 29 ++++++++++++++++++++++++-----
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index c34b55a6e5540..b24f404ab03ac 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -14,9 +14,15 @@
 #include <linux/gfp.h>
 
 typedef u32 depot_stack_handle_t;
+/*
+ * Number of bits in the handle that stack depot doesn't use. Users may store
+ * information in them.
+ */
+#define STACK_DEPOT_EXTRA_BITS 5
 
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
+					unsigned int extra_bits,
 					gfp_t gfp_flags, bool can_alloc);
 
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
@@ -25,6 +31,8 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);
 
+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
+
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 			int spaces);
 
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index b437ae79aca14..6ad7b8888ff19 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -41,7 +41,8 @@
 #define STACK_ALLOC_OFFSET_BITS (STACK_ALLOC_ORDER + PAGE_SHIFT - \
 					STACK_ALLOC_ALIGN)
 #define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - \
-		STACK_ALLOC_NULL_PROTECTION_BITS - STACK_ALLOC_OFFSET_BITS)
+		STACK_ALLOC_NULL_PROTECTION_BITS - \
+		STACK_ALLOC_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)
 #define STACK_ALLOC_SLABS_CAP 8192
 #define STACK_ALLOC_MAX_SLABS \
 	(((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_SLABS_CAP) ? \
@@ -54,6 +55,7 @@ union handle_parts {
 		u32 slabindex : STACK_ALLOC_INDEX_BITS;
 		u32 offset : STACK_ALLOC_OFFSET_BITS;
 		u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS;
+		u32 extra : STACK_DEPOT_EXTRA_BITS;
 	};
 };
 
@@ -72,6 +74,14 @@ static int next_slab_inited;
 static size_t depot_offset;
 static DEFINE_RAW_SPINLOCK(depot_lock);
 
+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
+{
+	union handle_parts parts = { .handle = handle };
+
+	return parts.extra;
+}
+EXPORT_SYMBOL(stack_depot_get_extra_bits);
+
 static bool init_stack_slab(void **prealloc)
 {
 	if (!*prealloc)
@@ -135,6 +145,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	stack->handle.slabindex = depot_index;
 	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
 	stack->handle.valid = 1;
+	stack->handle.extra = 0;
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
 	depot_offset += required_size;
 
@@ -297,6 +308,7 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
 *
 * @entries:		Pointer to storage array
 * @nr_entries:		Size of the storage array
+ * @extra_bits:		Flags to store in unused bits of depot_stack_handle_t
 * @alloc_flags:	Allocation gfp flags
 * @can_alloc:		Allocate stack slabs (increased chance of failure if false)
 *
@@ -305,6 +317,10 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
 * (allocates using GFP flags of @alloc_flags). If @can_alloc is %false, avoids
 * any allocations and will fail if no space is left to store the stack trace.
 *
+ * Additional opaque flags can be passed in @extra_bits, stored in the unused
+ * bits of the stack handle, and retrieved using stack_depot_get_extra_bits()
+ * without calling stack_depot_fetch().
+ *
 * Context: Any context, but setting @can_alloc to %false is required if
 * alloc_pages() cannot be used from the current context. Currently
 * this is the case from contexts where neither %GFP_ATOMIC nor
@@ -314,10 +330,11 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
 */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
+					unsigned int extra_bits,
 					gfp_t alloc_flags, bool can_alloc)
 {
 	struct stack_record *found = NULL, **bucket;
-	depot_stack_handle_t retval = 0;
+	union handle_parts retval = { .handle = 0 };
 	struct page *page = NULL;
 	void *prealloc = NULL;
 	unsigned long flags;
@@ -391,9 +408,11 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		free_pages((unsigned long)prealloc, STACK_ALLOC_ORDER);
 	}
 	if (found)
-		retval = found->handle.handle;
+		retval.handle = found->handle.handle;
 fast_exit:
-	return retval;
+	retval.extra = extra_bits;
+
+	return retval.handle;
 }
 EXPORT_SYMBOL_GPL(__stack_depot_save);
@@ -413,6 +432,6 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries,
 				      gfp_t alloc_flags)
 {
-	return __stack_depot_save(entries, nr_entries, alloc_flags, true);
+	return __stack_depot_save(entries, nr_entries, 0, alloc_flags, true);
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);

From patchwork Tue Dec 14 16:20:10 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676329
Date: Tue, 14 Dec 2021 17:20:10 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-4-glider@google.com>
Subject: [PATCH 03/43] kasan: common: adapt to the new prototype of __stack_depot_save()
From: Alexander Potapenko
To: glider@google.com

Pass extra_bits=0, as KASAN does not intend to store additional
information in the stack handle. No functional change.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I932d8f4f11a41b7483e0d57078744cc94697607a
---
 mm/kasan/common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 8428da2aaf173..6c690ca0ee41a 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -37,7 +37,7 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
 	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
 	nr_entries = filter_irq_stacks(entries, nr_entries);
 
-	return __stack_depot_save(entries, nr_entries, flags, can_alloc);
+	return __stack_depot_save(entries, nr_entries, 0, flags, can_alloc);
 }
 
 void kasan_set_track(struct kasan_track *track, gfp_t flags)

From patchwork Tue Dec 14 16:20:11 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676331
Date: Tue, 14 Dec 2021 17:20:11 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-5-glider@google.com>
Subject: [PATCH 04/43] instrumented.h: allow instrumenting both sides of copy_from_user()
From: Alexander Potapenko
To: glider@google.com

Introduce instrument_copy_from_user_before() and
instrument_copy_from_user_after() hooks to be invoked before and after
the call to copy_from_user().

KASAN and KCSAN will only be using instrument_copy_from_user_before(),
but for KMSAN we'll need to insert code after copy_from_user().
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I855034578f0b0f126734cbd734fb4ae1d3a6af99
---
 include/linux/instrumented.h | 21 +++++++++++++++++++--
 include/linux/uaccess.h      | 19 ++++++++++++++-----
 lib/iov_iter.c               |  9 ++++++---
 lib/usercopy.c               |  3 ++-
 4 files changed, 41 insertions(+), 11 deletions(-)

diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
index 42faebbaa202a..ee8f7d17d34f5 100644
--- a/include/linux/instrumented.h
+++ b/include/linux/instrumented.h
@@ -120,7 +120,7 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 
 /**
- * instrument_copy_from_user - instrument writes of copy_from_user
+ * instrument_copy_from_user_before - add instrumentation before copy_from_user
 *
 * Instrument writes to kernel memory, that are due to copy_from_user (and
 * variants). The instrumentation should be inserted before the accesses.
@@ -130,10 +130,27 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
 * @n number of bytes to copy
 */
 static __always_inline void
-instrument_copy_from_user(const void *to, const void __user *from, unsigned long n)
+instrument_copy_from_user_before(const void *to, const void __user *from, unsigned long n)
 {
 	kasan_check_write(to, n);
 	kcsan_check_write(to, n);
 }
 
+/**
+ * instrument_copy_from_user_after - add instrumentation after copy_from_user
+ *
+ * Instrument writes to kernel memory, that are due to copy_from_user (and
+ * variants). The instrumentation should be inserted after the accesses.
+ *
+ * @to destination address
+ * @from source address
+ * @n number of bytes to copy
+ * @left number of bytes not copied (as returned by copy_from_user)
+ */
+static __always_inline void
+instrument_copy_from_user_after(const void *to, const void __user *from,
+				unsigned long n, unsigned long left)
+{
+}
+
 #endif /* _LINUX_INSTRUMENTED_H */
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index ac0394087f7d4..8dadd8642afbb 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -98,20 +98,28 @@ static inline void force_uaccess_end(mm_segment_t oldfs)
 static __always_inline __must_check unsigned long
 __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 {
-	instrument_copy_from_user(to, from, n);
+	unsigned long res;
+
+	instrument_copy_from_user_before(to, from, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, res);
+	return res;
 }
 
 static __always_inline __must_check unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	unsigned long res;
+
 	might_fault();
+	instrument_copy_from_user_before(to, from, n);
 	if (should_fail_usercopy())
 		return n;
-	instrument_copy_from_user(to, from, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, res);
+	return res;
 }
 
 /**
@@ -155,8 +163,9 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		instrument_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 66a740e6e153c..28c033cb9e803 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -161,13 +161,16 @@ static int copyout(void __user *to, const void *from, size_t n)
 
 static int copyin(void *to, const void __user *from, size_t n)
 {
+	size_t res = n;
+
 	if (should_fail_usercopy())
 		return n;
 	if (access_ok(from, n)) {
-		instrument_copy_from_user(to, from, n);
-		n = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
+		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
-	return n;
+	return res;
 }
 
 static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
diff --git a/lib/usercopy.c b/lib/usercopy.c
index 7413dd300516e..1505a52f23a01 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -12,8 +12,9 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		instrument_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);

From patchwork Tue Dec 14 16:20:12 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676333
Date: Tue, 14 Dec 2021 17:20:12 +0100
Message-Id: <20211214162050.660953-6-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 05/43] asm: x86: instrument usercopy in get_user() and __put_user_size()
From: Alexander Potapenko
To: glider@google.com
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

Use hooks from instrumented.h to notify bug detection tools about usercopy
events in get_user() and put_user_size(). It's still unclear how to
instrument put_user(), which assumes that instrumentation code doesn't
clobber RAX.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ia9f12bfe5832623250e20f1859fdf5cc485a2fce
---
 arch/x86/include/asm/uaccess.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 33a68407def3f..86ad5ab211e97 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -5,6 +5,7 @@
  * User space memory access functions
  */
 #include
+#include
 #include
 #include
 #include
@@ -126,11 +127,13 @@ extern int __get_user_bad(void);
 	int __ret_gu;							\
 	register __inttype(*(ptr)) __val_gu asm("%"_ASM_DX);		\
 	__chk_user_ptr(ptr);						\
+	instrument_copy_from_user_before((void *)&(x), ptr, sizeof(*(ptr))); \
 	asm volatile("call __" #fn "_%P4"				\
 		     : "=a" (__ret_gu), "=r" (__val_gu),		\
 		       ASM_CALL_CONSTRAINT				\
 		     : "0" (ptr), "i" (sizeof(*(ptr))));		\
 	(x) = (__force __typeof__(*(ptr))) __val_gu;			\
+	instrument_copy_from_user_after((void *)&(x), ptr, sizeof(*(ptr)), 0); \
 	__builtin_expect(__ret_gu, 0);					\
 })

@@ -275,7 +278,9 @@ extern void __put_user_nocheck_8(void);

 #define __put_user_size(x, ptr, size, label)				\
 do {									\
+	__typeof__(*(ptr)) __pus_val = x;				\
 	__chk_user_ptr(ptr);						\
+	instrument_copy_to_user(ptr, &(__pus_val), size);		\
 	switch (size) {							\
 	case 1:								\
 		__put_user_goto(x, ptr, "b", "iq", label);		\
@@ -313,6 +318,7 @@ do {									\
 #define __get_user_size(x, ptr, size, label)				\
 do {									\
 	__chk_user_ptr(ptr);						\
+	instrument_copy_from_user_before((void *)&(x), ptr, size);	\
 	switch (size) {							\
 	unsigned char x_u8__;						\
 	case 1:								\
@@ -331,6 +337,7 @@ do {									\
 	default:							\
 		(x) = __get_user_bad();					\
 	}								\
+	instrument_copy_from_user_after((void *)&(x), ptr, size, 0);	\
 } while (0)

 #define __get_user_asm(x, addr, itype, ltype, label)			\

From patchwork Tue Dec 14 16:20:13 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676335
Date: Tue, 14 Dec 2021 17:20:13 +0100
Message-Id: <20211214162050.660953-7-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 06/43] asm-generic: instrument usercopy in cacheflush.h
From: Alexander Potapenko
To: glider@google.com
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

Notify memory tools about usercopy events in copy_to_user_page() and
copy_from_user_page().
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ic1ee8da1886325f46ad67f52176f48c2c836c48f
---
 include/asm-generic/cacheflush.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 4f07afacbc239..0f63eb325025f 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_GENERIC_CACHEFLUSH_H
 #define _ASM_GENERIC_CACHEFLUSH_H

+#include
+
 struct mm_struct;
 struct vm_area_struct;
 struct page;
@@ -105,6 +107,7 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 #ifndef copy_to_user_page
 #define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
 	do {							\
+		instrument_copy_to_user(dst, src, len);		\
 		memcpy(dst, src, len);				\
 		flush_icache_user_page(vma, page, vaddr, len);	\
 	} while (0)
@@ -112,7 +115,11 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)

 #ifndef copy_from_user_page
 #define copy_from_user_page(vma, page, vaddr, dst, src, len)	\
-	memcpy(dst, src, len)
+	do {							\
+		instrument_copy_from_user_before(dst, src, len); \
+		memcpy(dst, src, len);				\
+		instrument_copy_from_user_after(dst, src, len, 0); \
+	} while (0)
 #endif

 #endif /* _ASM_GENERIC_CACHEFLUSH_H */

From patchwork Tue Dec 14 16:20:14 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676337
Date: Tue, 14 Dec 2021 17:20:14 +0100
Message-Id: <20211214162050.660953-8-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 07/43] compiler_attributes.h: add __disable_sanitizer_instrumentation
From: Alexander Potapenko
To: glider@google.com
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

The new attribute maps to __attribute__((disable_sanitizer_instrumentation)),
which will be supported by Clang >= 14.0. Future support in GCC is also
possible. This attribute disables compiler instrumentation for kernel
sanitizer tools, making it easier to implement noinstr. It is different from
the existing __no_sanitize* attributes, which may still allow certain types
of instrumentation to prevent false positives.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ic0123ce99b33ab7d5ed1ae90593425be8d3d774a
---
 include/linux/compiler_attributes.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
index b9121afd87331..37e2600202216 100644
--- a/include/linux/compiler_attributes.h
+++ b/include/linux/compiler_attributes.h
@@ -308,6 +308,24 @@
 # define __compiletime_warning(msg)
 #endif

+/*
+ * Optional: only supported since clang >= 14.0
+ *
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#disable-sanitizer-instrumentation
+ *
+ * disable_sanitizer_instrumentation is not always similar to
+ * no_sanitize(()): the latter may still let specific sanitizers
+ * insert code into functions to prevent false positives. Unlike that,
+ * disable_sanitizer_instrumentation prevents all kinds of instrumentation to
+ * functions with the attribute.
+ */
+#if __has_attribute(disable_sanitizer_instrumentation)
+# define __disable_sanitizer_instrumentation \
+	__attribute__((disable_sanitizer_instrumentation))
+#else
+# define __disable_sanitizer_instrumentation
+#endif
+
 /*
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-weak-function-attribute
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-weak-variable-attribute

From patchwork Tue Dec 14 16:20:15 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676341
Date: Tue, 14 Dec 2021 17:20:15 +0100
Message-Id: <20211214162050.660953-9-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 08/43] kmsan: add ReST documentation
From: Alexander Potapenko
To: glider@google.com
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

Add Documentation/dev-tools/kmsan.rst and reference it in the dev-tools
index.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I751586f79418b95550a83c6035c650b5b01567cc
---
 Documentation/dev-tools/index.rst |   1 +
 Documentation/dev-tools/kmsan.rst | 411 ++++++++++++++++++++++++++++++
 2 files changed, 412 insertions(+)
 create mode 100644 Documentation/dev-tools/kmsan.rst

diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
index 010a2af1e7d9e..2fc71f769f481 100644
--- a/Documentation/dev-tools/index.rst
+++ b/Documentation/dev-tools/index.rst
@@ -24,6 +24,7 @@ Documentation/dev-tools/testing-overview.rst
    kcov
    gcov
    kasan
+   kmsan
    ubsan
    kmemleak
    kcsan
diff --git a/Documentation/dev-tools/kmsan.rst b/Documentation/dev-tools/kmsan.rst
new file mode 100644
index 0000000000000..121a1c46820a9
--- /dev/null
+++ b/Documentation/dev-tools/kmsan.rst
@@ -0,0 +1,411 @@
+=============================
+KernelMemorySanitizer (KMSAN)
+=============================
+
+KMSAN is a dynamic error detector aimed at finding uses of uninitialized
+values. It is based on compiler instrumentation, and is quite similar to the
+userspace `MemorySanitizer tool`_.
+
+Example report
+==============
+
+Here is an example of a KMSAN report::
+
+  =====================================================
+  BUG: KMSAN: uninit-value in test_uninit_kmsan_check_memory+0x1be/0x380 [kmsan_test]
+   test_uninit_kmsan_check_memory+0x1be/0x380 mm/kmsan/kmsan_test.c:273
+   kunit_run_case_internal lib/kunit/test.c:333
+   kunit_try_run_case+0x206/0x420 lib/kunit/test.c:374
+   kunit_generic_run_threadfn_adapter+0x6d/0xc0 lib/kunit/try-catch.c:28
+   kthread+0x721/0x850 kernel/kthread.c:327
+   ret_from_fork+0x1f/0x30 ??:?
+
+  Uninit was stored to memory at:
+   do_uninit_local_array+0xfa/0x110 mm/kmsan/kmsan_test.c:260
+   test_uninit_kmsan_check_memory+0x1a2/0x380 mm/kmsan/kmsan_test.c:271
+   kunit_run_case_internal lib/kunit/test.c:333
+   kunit_try_run_case+0x206/0x420 lib/kunit/test.c:374
+   kunit_generic_run_threadfn_adapter+0x6d/0xc0 lib/kunit/try-catch.c:28
+   kthread+0x721/0x850 kernel/kthread.c:327
+   ret_from_fork+0x1f/0x30 ??:?
+
+  Local variable uninit created at:
+   do_uninit_local_array+0x4a/0x110 mm/kmsan/kmsan_test.c:256
+   test_uninit_kmsan_check_memory+0x1a2/0x380 mm/kmsan/kmsan_test.c:271
+
+  Bytes 4-7 of 8 are uninitialized
+  Memory access of size 8 starts at ffff888083fe3da0
+
+  CPU: 0 PID: 6731 Comm: kunit_try_catch Tainted: G B E 5.16.0-rc3+ #104
+  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
+  =====================================================
+
+
+The report says that the local variable ``uninit`` was created uninitialized in
+``do_uninit_local_array()``. The lower stack trace corresponds to the place
+where this variable was created.
+
+The upper stack shows where the uninit value was used - in
+``test_uninit_kmsan_check_memory()``. The tool shows the bytes which were left
+uninitialized in the local variable, as well as the stack where the value was
+copied to another memory location before use.
+
+Please note that KMSAN only reports an error when an uninitialized value is
+actually used (e.g. in a condition or pointer dereference). A lot of
+uninitialized values in the kernel are never used, and reporting them would
+result in too many false positives.
+
+KMSAN and Clang
+===============
+
+In order for KMSAN to work the kernel must be built with Clang, which so far is
+the only compiler that has KMSAN support. The kernel instrumentation pass is
+based on the userspace `MemorySanitizer tool`_.
+
+How to build
+============
+
+In order to build a kernel with KMSAN you will need a fresh Clang (14.0.0+).
+Please refer to `LLVM documentation`_ for the instructions on how to build Clang.
+
+Now configure and build the kernel with CONFIG_KMSAN enabled.
+
+How KMSAN works
+===============
+
+KMSAN shadow memory
+-------------------
+
+KMSAN associates a metadata byte (also called shadow byte) with every byte of
+kernel memory. A bit in the shadow byte is set iff the corresponding bit of the
+kernel memory byte is uninitialized. Marking the memory uninitialized (i.e.
+setting its shadow bytes to ``0xff``) is called poisoning, marking it
+initialized (setting the shadow bytes to ``0x00``) is called unpoisoning.
+
+When a new variable is allocated on the stack, it is poisoned by default by
+instrumentation code inserted by the compiler (unless it is a stack variable
+that is immediately initialized). Any new heap allocation done without
+``__GFP_ZERO`` is also poisoned.
+
+Compiler instrumentation also tracks the shadow values with the help from the
+runtime library in ``mm/kmsan/``.
+
+The shadow value of a basic or compound type is an array of bytes of the same
+length. When a constant value is written into memory, that memory is unpoisoned.
+When a value is read from memory, its shadow memory is also obtained and
+propagated into all the operations which use that value. For every instruction
+that takes one or more values the compiler generates code that calculates the
+shadow of the result depending on those values and their shadows.
+
+Example::
+
+  int a = 0xff; // i.e. 0x000000ff
+  int b;
+  int c = a | b;
+
+In this case the shadow of ``a`` is ``0``, shadow of ``b`` is ``0xffffffff``,
+shadow of ``c`` is ``0xffffff00``. This means that the upper three bytes of
+``c`` are uninitialized, while the lower byte is initialized.
+
+
+Origin tracking
+---------------
+
+Every four bytes of kernel memory also have a so-called origin assigned to
+them. This origin describes the point in program execution at which the
+uninitialized value was created.
Every origin is associated with either the
+full allocation stack (for heap-allocated memory), or the function containing
+the uninitialized variable (for locals).
+
+When an uninitialized variable is allocated on stack or heap, a new origin
+value is created, and that variable's origin is filled with that value.
+When a value is read from memory, its origin is also read and kept together
+with the shadow. For every instruction that takes one or more values the origin
+of the result is one of the origins corresponding to any of the uninitialized
+inputs. If a poisoned value is written into memory, its origin is written to the
+corresponding storage as well.
+
+Example 1::
+
+  int a = 42;
+  int b;
+  int c = a + b;
+
+In this case the origin of ``b`` is generated upon function entry, and is
+stored to the origin of ``c`` right before the addition result is written into
+memory.
+
+Several variables may share the same origin address if they are stored in the
+same four-byte chunk. In this case every write to either variable updates the
+origin for all of them. We have to sacrifice precision in this case, because
+storing origins for individual bits (and even bytes) would be too costly.
+
+Example 2::
+
+  int combine(short a, short b) {
+    union ret_t {
+      int i;
+      short s[2];
+    } ret;
+    ret.s[0] = a;
+    ret.s[1] = b;
+    return ret.i;
+  }
+
+If ``a`` is initialized and ``b`` is not, the shadow of the result would be
+0xffff0000, and the origin of the result would be the origin of ``b``.
+``ret.s[0]`` would have the same origin, but it will never be used, because
+that variable is initialized.
+
+If both function arguments are uninitialized, only the origin of the second
+argument is preserved.
+
+Origin chaining
+~~~~~~~~~~~~~~~
+
+To ease debugging, KMSAN creates a new origin for every store of an
+uninitialized value to memory. The new origin references both its creation stack
+and the previous origin the value had.
This may cause increased memory
+consumption, so we limit the length of origin chains in the runtime.
+
+Clang instrumentation API
+-------------------------
+
+Clang instrumentation pass inserts calls to functions defined in
+``mm/kmsan/instrumentation.c`` into the kernel code.
+
+Shadow manipulation
+~~~~~~~~~~~~~~~~~~~
+
+For every memory access the compiler emits a call to a function that returns a
+pair of pointers to the shadow and origin addresses of the given memory::
+
+  typedef struct {
+    void *shadow, *origin;
+  } shadow_origin_ptr_t
+
+  shadow_origin_ptr_t __msan_metadata_ptr_for_load_{1,2,4,8}(void *addr)
+  shadow_origin_ptr_t __msan_metadata_ptr_for_store_{1,2,4,8}(void *addr)
+  shadow_origin_ptr_t __msan_metadata_ptr_for_load_n(void *addr, uintptr_t size)
+  shadow_origin_ptr_t __msan_metadata_ptr_for_store_n(void *addr, uintptr_t size)
+
+The function name depends on the memory access size.
+
+The compiler makes sure that for every loaded value its shadow and origin
+values are read from memory. When a value is stored to memory, its shadow and
+origin are also stored using the metadata pointers.
+
+Origin tracking
+~~~~~~~~~~~~~~~
+
+A special function is used to create a new origin value for a local variable
+and set the origin of that variable to that value::
+
+ void __msan_poison_alloca(void *addr, uintptr_t size, char *descr)
+
+Access to per-task data
+~~~~~~~~~~~~~~~~~~~~~~~
+
+At the beginning of every instrumented function KMSAN inserts a call to
+``__msan_get_context_state()``::
+
+ kmsan_context_state *__msan_get_context_state(void)
+
+``kmsan_context_state`` is declared in ``include/linux/kmsan.h``::
+
+ struct kmsan_context_state {
+   char param_tls[KMSAN_PARAM_SIZE];
+   char retval_tls[KMSAN_RETVAL_SIZE];
+   char va_arg_tls[KMSAN_PARAM_SIZE];
+   char va_arg_origin_tls[KMSAN_PARAM_SIZE];
+   u64 va_arg_overflow_size_tls;
+   char param_origin_tls[KMSAN_PARAM_SIZE];
+   depot_stack_handle_t retval_origin_tls;
+ };
+
+This structure is used by KMSAN to pass parameter shadows and origins between
+instrumented functions.
+
+String functions
+~~~~~~~~~~~~~~~~
+
+The compiler replaces calls to ``memcpy()``/``memmove()``/``memset()`` with the
+following functions. These functions are also called when data structures are
+initialized or copied, making sure shadow and origin values are copied
+alongside the data::
+
+ void *__msan_memcpy(void *dst, void *src, uintptr_t n)
+ void *__msan_memmove(void *dst, void *src, uintptr_t n)
+ void *__msan_memset(void *dst, int c, uintptr_t n)
+
+Error reporting
+~~~~~~~~~~~~~~~
+
+For each pointer dereference and each condition the compiler emits a shadow
+check that calls ``__msan_warning()`` in case a poisoned value is used::
+
+ void __msan_warning(u32 origin)
+
+``__msan_warning()`` causes the KMSAN runtime to print an error report.
+
+Inline assembly instrumentation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+KMSAN instruments every inline assembly output with a call to::
+
+ void __msan_instrument_asm_store(void *addr, uintptr_t size)
+
+which unpoisons the memory region.
+
+This approach may mask certain errors, but it also helps to avoid a lot of
+false positives in bitwise operations, atomics etc.
+
+Sometimes the pointers passed into inline assembly do not point to valid
+memory. In such cases they are ignored at runtime.
+
+Disabling the instrumentation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+A function can be marked with ``__no_kmsan_checks``. Doing so makes KMSAN
+ignore uninitialized values in that function and mark its output as
+initialized. As a result, the user will not get KMSAN reports related to that
+function.
+
+Another function attribute supported by KMSAN is ``__no_sanitize_memory``.
+Applying this attribute to a function will result in KMSAN not instrumenting
+it, which can be helpful if we do not want the compiler to mess up some
+low-level code (e.g. code marked with ``noinstr``).
+
+This however comes at a cost: stack allocations from such functions will have
+incorrect shadow/origin values, likely leading to false positives. Functions
+called from non-instrumented code may also receive incorrect metadata for
+their parameters.
+
+As a rule of thumb, avoid using ``__no_sanitize_memory`` explicitly.
+
+It is also possible to disable KMSAN for a single file (e.g. main.o)::
+
+ KMSAN_SANITIZE_main.o := n
+
+or for the whole directory::
+
+ KMSAN_SANITIZE := n
+
+in the Makefile. Think of this as applying ``__no_sanitize_memory`` to every
+function in the file or directory. Most users won't need ``KMSAN_SANITIZE``,
+unless their code gets broken by KMSAN (e.g. runs at early boot time).
+
+Runtime library
+---------------
+
+The code is located in ``mm/kmsan/``.
+
+Per-task KMSAN state
+~~~~~~~~~~~~~~~~~~~~
+
+Every ``task_struct`` has an associated KMSAN task state that holds the KMSAN
+context (see above) and a per-task flag disallowing KMSAN reports::
+
+ struct kmsan_context {
+   ...
+   bool allow_reporting;
+   struct kmsan_context_state cstate;
+   ...
+ };
+
+ struct task_struct {
+   ...
+   struct kmsan_context kmsan;
+   ...
+ };
+
+
+KMSAN contexts
+~~~~~~~~~~~~~~
+
+When running in a kernel task context, KMSAN uses ``current->kmsan.cstate`` to
+hold the metadata for function parameters and return values.
+
+When the kernel is running in an interrupt, softirq or NMI context, where
+``current`` is unavailable, KMSAN switches to a per-CPU interrupt state::
+
+ DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx);
+
+Metadata allocation
+~~~~~~~~~~~~~~~~~~~
+
+There are several places in the kernel where the metadata is stored.
+
+1. Each ``struct page`` instance contains two pointers to its shadow and
+origin pages::
+
+ struct page {
+   ...
+   struct page *shadow, *origin;
+   ...
+ };
+
+At boot time, the kernel allocates shadow and origin pages for every available
+kernel page. This is done quite late, when the kernel address space is already
+fragmented, so normal data pages may arbitrarily interleave with the metadata
+pages.
+
+This means that, in general, the shadow/origin pages of two contiguous memory
+pages may not themselves be contiguous. So, if a memory access crosses the
+boundary of a memory block, accesses to shadow/origin memory may potentially
+corrupt other pages or read incorrect values from them.
+
+In practice, contiguous memory pages returned by the same ``alloc_pages()``
+call will have contiguous metadata, whereas if these pages belong to two
+different allocations their metadata pages can be fragmented.
+
+For the kernel data (``.data``, ``.bss`` etc.) and percpu memory regions there
+are also no guarantees on metadata contiguity.
+
+When ``__msan_metadata_ptr_for_XXX_YYY()`` hits the border between two pages
+with non-contiguous metadata, it returns pointers to fake shadow/origin
+regions::
+
+ char dummy_load_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+ char dummy_store_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE)));
+
+``dummy_load_page`` is zero-initialized, so reads from it always yield zeroes.
+All stores to ``dummy_store_page`` are ignored.
+
+2. For vmalloc memory and modules, there is a direct mapping between the
+memory range, its shadow and its origins. KMSAN reduces the vmalloc area by
+3/4, making only the first quarter available to ``vmalloc()``. The second
+quarter of the vmalloc area contains shadow memory for the first quarter, the
+third one holds the origins. A small part of the fourth quarter contains
+shadow and origins for the kernel modules. Please refer to
+``arch/x86/include/asm/pgtable_64_types.h`` for more details.
+
+When an array of pages is mapped into a contiguous virtual memory space, their
+shadow and origin pages are similarly mapped into contiguous regions.
+
+3. For the CPU entry area there are separate per-CPU arrays that hold its
+metadata::
+
+ DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_shadow);
+ DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_origin);
+
+When calculating shadow and origin addresses for a given memory address, KMSAN
+checks whether the address belongs to the physical page range, the virtual
+page range or the CPU entry area.
+
+Handling ``pt_regs``
+~~~~~~~~~~~~~~~~~~~~
+
+Many functions receive a ``struct pt_regs`` holding the register state at a
+certain point. Registers do not have (easily calculable) shadow or origin
+associated with them, so we assume they are always initialized.
+
+References
+==========
+
+E. Stepanov, K. Serebryany. `MemorySanitizer: fast detector of uninitialized
+memory use in C++
+`_.
+In Proceedings of CGO 2015.
+
+.. _MemorySanitizer tool: https://clang.llvm.org/docs/MemorySanitizer.html
+..
_LLVM documentation: https://llvm.org/docs/GettingStarted.html

From patchwork Tue Dec 14 16:20:16 2021
From: Alexander Potapenko
Date: Tue, 14 Dec 2021 17:20:16 +0100
Message-Id: <20211214162050.660953-10-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 09/43] kmsan: introduce __no_sanitize_memory and __no_kmsan_checks

__no_sanitize_memory is a function attribute that instructs KMSAN to skip a
function during instrumentation. This is needed to e.g. implement the noinstr
functions.

__no_kmsan_checks is a function attribute that makes KMSAN ignore the
uninitialized values coming from the function's inputs, and initialize the
function's outputs. Functions marked with this attribute can't be inlined into
functions not marked with it, and vice versa.

__SANITIZE_MEMORY__ is a macro that's defined iff the file is instrumented
with KMSAN. This is not the same as CONFIG_KMSAN, which is defined for every
file.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I004ff0360c918d3cd8b18767ddd1381c6d3281be
---
 include/linux/compiler-clang.h | 23 +++++++++++++++++++++++
 include/linux/compiler-gcc.h   |  6 ++++++
 2 files changed, 29 insertions(+)

diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index 3c4de9b6c6e3e..5f11a6f269e28 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -51,6 +51,29 @@
 #define __no_sanitize_undefined
 #endif
 
+#if __has_feature(memory_sanitizer)
+#define __SANITIZE_MEMORY__
+/*
+ * Unlike other sanitizers, KMSAN still inserts code into functions marked with
+ * no_sanitize("kernel-memory"). Using disable_sanitizer_instrumentation
+ * provides the behavior consistent with other __no_sanitize_ attributes,
+ * guaranteeing that __no_sanitize_memory functions remain uninstrumented.
+ */
+#define __no_sanitize_memory __disable_sanitizer_instrumentation
+
+/*
+ * The __no_kmsan_checks attribute ensures that a function does not produce
+ * false positive reports by:
+ * - initializing all local variables and memory stores in this function;
+ * - skipping all shadow checks;
+ * - passing initialized arguments to this function's callees.
+ */
+#define __no_kmsan_checks __attribute__((no_sanitize("kernel-memory")))
+#else
+#define __no_sanitize_memory
+#define __no_kmsan_checks
+#endif
+
 /*
  * Support for __has_feature(coverage_sanitizer) was added in Clang 13 together
  * with no_sanitize("coverage"). Prior versions of Clang support coverage

diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index ccbbd31b3aae5..f6e69387aad05 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -129,6 +129,12 @@
 #define __SANITIZE_ADDRESS__
 #endif
 
+/*
+ * GCC does not support KMSAN.
+ */
+#define __no_sanitize_memory
+#define __no_kmsan_checks
+
 /*
  * Turn individual warnings and errors on and off locally, depending
  * on version.
From patchwork Tue Dec 14 16:20:17 2021
From: Alexander Potapenko
Date: Tue, 14 Dec 2021 17:20:17 +0100
Message-Id: <20211214162050.660953-11-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 10/43] kmsan: pgtable: reduce vmalloc space

KMSAN is going to use 3/4 of existing vmalloc space to hold the metadata,
therefore we lower VMALLOC_END to make sure vmalloc() doesn't allocate past
the first 1/4.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I9d8b7f0a88a639f1263bc693cbd5c136626f7efd
---
 arch/x86/include/asm/pgtable_64_types.h | 41 ++++++++++++++++++++++++-
 arch/x86/mm/init_64.c                   |  2 +-
 2 files changed, 41 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 91ac106545703..7f15d43754a34 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -139,7 +139,46 @@ extern unsigned int ptrs_per_p4d;
 # define VMEMMAP_START __VMEMMAP_BASE_L4
 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
 
-#define VMALLOC_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
+#define VMEMORY_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
+
+#ifndef CONFIG_KMSAN
+#define VMALLOC_END VMEMORY_END
+#else
+/*
+ * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
+ * are used to keep the metadata for virtual pages. The memory formerly
+ * belonging to vmalloc area is now laid out as follows:
+ *
+ * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area
+ * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to
+ *              VMALLOC_END+KMSAN_VMALLOC_SHADOW_OFFSET - vmalloc area shadow
+ * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to
+ *              VMALLOC_END+KMSAN_VMALLOC_ORIGIN_OFFSET - vmalloc area origins
+ * 4th quarter: KMSAN_MODULES_SHADOW_START to KMSAN_MODULES_ORIGIN_START
+ *              - shadow for modules,
+ *              KMSAN_MODULES_ORIGIN_START to
+ *              KMSAN_MODULES_ORIGIN_START + MODULES_LEN - origins for modules.
+ */
+#define VMALLOC_QUARTER_SIZE ((VMALLOC_SIZE_TB << 40) >> 2)
+#define VMALLOC_END (VMALLOC_START + VMALLOC_QUARTER_SIZE - 1)
+
+/*
+ * vmalloc metadata addresses are calculated by adding shadow/origin offsets
+ * to vmalloc address.
+ */
+#define KMSAN_VMALLOC_SHADOW_OFFSET VMALLOC_QUARTER_SIZE
+#define KMSAN_VMALLOC_ORIGIN_OFFSET (VMALLOC_QUARTER_SIZE << 1)
+
+#define KMSAN_VMALLOC_SHADOW_START (VMALLOC_START + KMSAN_VMALLOC_SHADOW_OFFSET)
+#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_START + KMSAN_VMALLOC_ORIGIN_OFFSET)
+
+/*
+ * The shadow/origin for modules are placed one by one in the last 1/4 of
+ * vmalloc space.
+ */
+#define KMSAN_MODULES_SHADOW_START (VMALLOC_END + KMSAN_VMALLOC_ORIGIN_OFFSET + 1)
+#define KMSAN_MODULES_ORIGIN_START (KMSAN_MODULES_SHADOW_START + MODULES_LEN)
+#endif /* CONFIG_KMSAN */
 
 #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
 /* The module sections ends with the start of the fixmap */

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 36098226a9573..8e884e44a8d1e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1287,7 +1287,7 @@ static void __init preallocate_vmalloc_pages(void)
 	unsigned long addr;
 	const char *lvl;
 
-	for (addr = VMALLOC_START; addr <= VMALLOC_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
+	for (addr = VMALLOC_START; addr <= VMEMORY_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
 		pgd_t *pgd = pgd_offset_k(addr);
 		p4d_t *p4d;
 		pud_t *pud;

From patchwork Tue Dec 14 16:20:18 2021
From: Alexander Potapenko
Date: Tue, 14 Dec 2021 17:20:18 +0100
Message-Id: <20211214162050.660953-12-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 11/43] libnvdimm/pfn_dev: increase MAX_STRUCT_PAGE_SIZE

KMSAN adds extra metadata fields to struct page, so it does not fit into
64 bytes anymore.
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I353796acc6a850bfd7bb342aa1b63e616fc614f1
---
 drivers/nvdimm/nd.h       | 2 +-
 drivers/nvdimm/pfn_devs.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
index 6f8ce114032d0..b50aecd1dd423 100644
--- a/drivers/nvdimm/nd.h
+++ b/drivers/nvdimm/nd.h
@@ -663,7 +663,7 @@ void devm_namespace_disable(struct device *dev,
 		struct nd_namespace_common *ndns);
 #if IS_ENABLED(CONFIG_ND_CLAIM)
 /* max struct page size independent of kernel config */
-#define MAX_STRUCT_PAGE_SIZE 64
+#define MAX_STRUCT_PAGE_SIZE 128
 int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap);
 #else
 static inline int nvdimm_setup_pfn(struct nd_pfn *nd_pfn,

diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c
index 58eda16f5c534..07a539195cc8b 100644
--- a/drivers/nvdimm/pfn_devs.c
+++ b/drivers/nvdimm/pfn_devs.c
@@ -785,7 +785,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn)
 	 * when populating the vmemmap. This *should* be equal to
 	 * PMD_SIZE for most architectures.
 	 *
-	 * Also make sure size of struct page is less than 64. We
+	 * Also make sure size of struct page is less than 128. We
 	 * want to make sure we use large enough size here so that
 	 * we don't have a dynamic reserve space depending on
 	 * struct page size.
But we also want to make sure we notice

From patchwork Tue Dec 14 16:20:19 2021
From: Alexander Potapenko
Date: Tue, 14 Dec 2021 17:20:19 +0100
Message-Id: <20211214162050.660953-13-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 12/43] kcsan: clang: retire CONFIG_KCSAN_KCOV_BROKEN

kcov used to be broken prior to Clang 11, but right now that version is
already the minimum required to build with KCSAN, that is why we don't need
KCSAN_KCOV_BROKEN anymore.

Suggested-by: Marco Elver
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ida287421577f37de337139b5b5b9e977e4a6fee2
---
 lib/Kconfig.kcsan | 11 -----------
 1 file changed, 11 deletions(-)

diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index e0a93ffdef30e..b81454b2a0d09 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -10,21 +10,10 @@ config HAVE_KCSAN_COMPILER
 	  For the list of compilers that support KCSAN, please see
 	  .
 
-config KCSAN_KCOV_BROKEN
-	def_bool KCOV && CC_HAS_SANCOV_TRACE_PC
-	depends on CC_IS_CLANG
-	depends on !$(cc-option,-Werror=unused-command-line-argument -fsanitize=thread -fsanitize-coverage=trace-pc)
-	help
-	  Some versions of clang support either KCSAN and KCOV but not the
-	  combination of the two.
-	  See https://bugs.llvm.org/show_bug.cgi?id=45831 for the status
-	  in newer releases.
-
 menuconfig KCSAN
 	bool "KCSAN: dynamic data race detector"
 	depends on HAVE_ARCH_KCSAN && HAVE_KCSAN_COMPILER
 	depends on DEBUG_KERNEL && !KASAN
-	depends on !KCSAN_KCOV_BROKEN
 	select STACKTRACE
 	help
 	  The Kernel Concurrency Sanitizer (KCSAN) is a dynamic

From patchwork Tue Dec 14 16:20:20 2021
Subject: [PATCH 13/43] kmsan: add KMSAN runtime core
From: Alexander Potapenko
To: glider@google.com
Date: Tue, 14 Dec 2021 17:20:20 +0100
Message-Id: <20211214162050.660953-14-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>

This patch adds the core parts of KMSAN runtime and associated files:

- include/linux/kmsan-checks.h: user API to poison/unpoison/check the
  kernel memory;
- include/linux/kmsan.h: declarations of KMSAN hooks to be referenced
  outside of KMSAN runtime;
- lib/Kconfig.kmsan: CONFIG_KMSAN and related declarations;
- Makefile, mm/Makefile, mm/kmsan/Makefile: boilerplate Makefile code;
- mm/kmsan/annotations.c: non-inlineable implementation of KMSAN_INIT();
- mm/kmsan/core.c: core functions that operate with shadow and origin
  memory and perform checks, utility functions;
- mm/kmsan/hooks.c: KMSAN hooks for kernel subsystems;
- mm/kmsan/init.c: KMSAN initialization routines;
- mm/kmsan/instrumentation.c: functions called by KMSAN instrumentation;
- mm/kmsan/kmsan.h: internal KMSAN declarations;
- mm/kmsan/shadow.c: routines that encapsulate metadata creation and
  addressing;
- scripts/Makefile.kmsan: CFLAGS_KMSAN;
- scripts/Makefile.lib: KMSAN_SANITIZE and KMSAN_ENABLE_CHECKS macros.

The patch also adds the necessary bookkeeping bits to struct page and
struct task_struct:

- each struct page now contains pointers to two struct pages holding
  KMSAN metadata (shadow and origins) for the original struct page;
- each task_struct contains a struct kmsan_task_state used to track
  the metadata of function parameters and return values for that task.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I9b71bfe3425466c97159f9de0062e5e8e4fec866
---
 Makefile                     |   1 +
 include/linux/kmsan-checks.h | 123 ++++++++++
 include/linux/kmsan.h        | 365 ++++++++++++++++++++++++++++++
 include/linux/mm_types.h     |  12 +
 include/linux/sched.h        |   5 +
 lib/Kconfig.debug            |   1 +
 lib/Kconfig.kmsan            |  18 ++
 mm/Makefile                  |   1 +
 mm/kmsan/Makefile            |  22 ++
 mm/kmsan/annotations.c       |  28 +++
 mm/kmsan/core.c              | 427 +++++++++++++++++++++++++++++++++++
 mm/kmsan/hooks.c             | 400 ++++++++++++++++++++++++++++++++
 mm/kmsan/init.c              | 238 +++++++++++++++++++
 mm/kmsan/instrumentation.c   | 233 +++++++++++++++++++
 mm/kmsan/kmsan.h             | 197 ++++++++++++++++
 mm/kmsan/report.c            | 210 +++++++++++++++++
 mm/kmsan/shadow.c            | 332 +++++++++++++++++++++++++++
 scripts/Makefile.kmsan       |   1 +
 scripts/Makefile.lib         |   9 +
 19 files changed, 2623 insertions(+)
 create mode 100644 include/linux/kmsan-checks.h
 create mode 100644 include/linux/kmsan.h
 create mode 100644 lib/Kconfig.kmsan
 create mode 100644 mm/kmsan/Makefile
 create mode 100644 mm/kmsan/annotations.c
 create mode 100644 mm/kmsan/core.c
 create mode 100644 mm/kmsan/hooks.c
 create mode 100644 mm/kmsan/init.c
 create mode 100644 mm/kmsan/instrumentation.c
 create mode 100644 mm/kmsan/kmsan.h
 create mode 100644 mm/kmsan/report.c
 create mode 100644 mm/kmsan/shadow.c
 create mode 100644 scripts/Makefile.kmsan

diff --git a/Makefile b/Makefile
index 765115c99655f..7af3edfb2d0de 100644
--- a/Makefile
+++ b/Makefile
@@ -1012,6 +1012,7 @@ include-y := scripts/Makefile.extrawarn
 include-$(CONFIG_DEBUG_INFO) += scripts/Makefile.debug
 include-$(CONFIG_KASAN) += scripts/Makefile.kasan
 include-$(CONFIG_KCSAN) += scripts/Makefile.kcsan
+include-$(CONFIG_KMSAN) += scripts/Makefile.kmsan
 include-$(CONFIG_UBSAN) += scripts/Makefile.ubsan
 include-$(CONFIG_KCOV) += scripts/Makefile.kcov
 include-$(CONFIG_GCC_PLUGINS) += scripts/Makefile.gcc-plugins
diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h
new file mode 100644
index 0000000000000..d41868c723d1e
--- /dev/null
+++ b/include/linux/kmsan-checks.h
@@ -0,0 +1,123 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * KMSAN checks to be used for one-off annotations in subsystems.
+ *
+ * Copyright (C) 2017-2021 Google LLC
+ * Author: Alexander Potapenko
+ *
+ */
+
+#ifndef _LINUX_KMSAN_CHECKS_H
+#define _LINUX_KMSAN_CHECKS_H
+
+#include
+
+#ifdef CONFIG_KMSAN
+
+/*
+ * Helper functions that mark the return value initialized.
+ * See mm/kmsan/annotations.c.
+ */
+u8 kmsan_init_1(u8 value);
+u16 kmsan_init_2(u16 value);
+u32 kmsan_init_4(u32 value);
+u64 kmsan_init_8(u64 value);
+
+static inline void *kmsan_init_ptr(void *ptr)
+{
+	return (void *)kmsan_init_8((u64)ptr);
+}
+
+static inline char kmsan_init_char(char value)
+{
+	return (u8)kmsan_init_1((u8)value);
+}
+
+#define __decl_kmsan_init_type(type, fn) unsigned type : fn, signed type : fn
+
+/**
+ * kmsan_init - Make the value initialized.
+ * @val: 1-, 2-, 4- or 8-byte integer that may be treated as uninitialized by
+ *       KMSAN.
+ *
+ * Return: value of @val that KMSAN treats as initialized.
+ */
+#define kmsan_init(val)                                                \
+	(                                                              \
+	(typeof(val))(_Generic((val),                                  \
+		__decl_kmsan_init_type(char, kmsan_init_1),            \
+		__decl_kmsan_init_type(short, kmsan_init_2),           \
+		__decl_kmsan_init_type(int, kmsan_init_4),             \
+		__decl_kmsan_init_type(long, kmsan_init_8),            \
+		char : kmsan_init_char,                                \
+		void * : kmsan_init_ptr)(val)))
+
+/**
+ * kmsan_poison_memory() - Mark the memory range as uninitialized.
+ * @address: address to start with.
+ * @size: size of buffer to poison.
+ * @flags: GFP flags for allocations done by this function.
+ *
+ * Until other data is written to this range, KMSAN will treat it as
+ * uninitialized. Error reports for this memory will reference the call site of
+ * kmsan_poison_memory() as origin.
+ */
+void kmsan_poison_memory(const void *address, size_t size, gfp_t flags);
+
+/**
+ * kmsan_unpoison_memory() - Mark the memory range as initialized.
+ * @address: address to start with.
+ * @size: size of buffer to unpoison.
+ *
+ * Until other data is written to this range, KMSAN will treat it as
+ * initialized.
+ */
+void kmsan_unpoison_memory(const void *address, size_t size);
+
+/**
+ * kmsan_check_memory() - Check the memory range for being initialized.
+ * @address: address to start with.
+ * @size: size of buffer to check.
+ *
+ * If any piece of the given range is marked as uninitialized, KMSAN will
+ * report an error.
+ */
+void kmsan_check_memory(const void *address, size_t size);
+
+/**
+ * kmsan_copy_to_user() - Notify KMSAN about a data transfer to userspace.
+ * @to: destination address in the userspace.
+ * @from: source address in the kernel.
+ * @to_copy: number of bytes to copy.
+ * @left: number of bytes not copied.
+ *
+ * If this is a real userspace data transfer, KMSAN checks the bytes that were
+ * actually copied to ensure there was no information leak. If @to belongs to
+ * the kernel space (which is possible for compat syscalls), KMSAN just copies
+ * the metadata.
+ */
+void kmsan_copy_to_user(const void *to, const void *from, size_t to_copy,
+			size_t left);
+
+#else
+
+#define kmsan_init(value) (value)
+
+static inline void kmsan_poison_memory(const void *address, size_t size,
+				       gfp_t flags)
+{
+}
+static inline void kmsan_unpoison_memory(const void *address, size_t size)
+{
+}
+static inline void kmsan_check_memory(const void *address, size_t size)
+{
+}
+static inline void kmsan_copy_to_user(const void *to, const void *from,
+				      size_t to_copy, size_t left)
+{
+}
+
+#endif
+
+#endif /* _LINUX_KMSAN_CHECKS_H */
diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
new file mode 100644
index 0000000000000..f17bc9ded7b97
--- /dev/null
+++ b/include/linux/kmsan.h
@@ -0,0 +1,365 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * KMSAN API for subsystems.
+ *
+ * Copyright (C) 2017-2021 Google LLC
+ * Author: Alexander Potapenko
+ *
+ */
+#ifndef _LINUX_KMSAN_H
+#define _LINUX_KMSAN_H
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+struct page;
+struct kmem_cache;
+struct task_struct;
+struct scatterlist;
+struct urb;
+
+#ifdef CONFIG_KMSAN
+
+/* These constants are defined in the MSan LLVM instrumentation pass. */
+#define KMSAN_RETVAL_SIZE 800
+#define KMSAN_PARAM_SIZE 800
+
+struct kmsan_context_state {
+	char param_tls[KMSAN_PARAM_SIZE];
+	char retval_tls[KMSAN_RETVAL_SIZE];
+	char va_arg_tls[KMSAN_PARAM_SIZE];
+	char va_arg_origin_tls[KMSAN_PARAM_SIZE];
+	u64 va_arg_overflow_size_tls;
+	char param_origin_tls[KMSAN_PARAM_SIZE];
+	depot_stack_handle_t retval_origin_tls;
+};
+
+#undef KMSAN_PARAM_SIZE
+#undef KMSAN_RETVAL_SIZE
+
+struct kmsan_ctx {
+	struct kmsan_context_state cstate;
+	int kmsan_in_runtime;
+	bool allow_reporting;
+};
+
+/**
+ * kmsan_init_shadow() - Initialize KMSAN shadow at boot time.
+ *
+ * Allocate and initialize KMSAN metadata for early allocations.
+ */
+void __init kmsan_init_shadow(void);
+
+/**
+ * kmsan_init_runtime() - Initialize KMSAN state and enable KMSAN.
+ */ +void __init kmsan_init_runtime(void); + +/** + * kmsan_memblock_free_pages() - handle freeing of memblock pages. + * @page: struct page to free. + * @order: order of @page. + * + * Freed pages are either returned to buddy allocator or held back to be used + * as metadata pages. + */ +bool __init kmsan_memblock_free_pages(struct page *page, unsigned int order); + +/** + * kmsan_task_create() - Initialize KMSAN state for the task. + * @task: task to initialize. + */ +void kmsan_task_create(struct task_struct *task); + +/** + * kmsan_task_exit() - Notify KMSAN that a task has exited. + * @task: task about to finish. + */ +void kmsan_task_exit(struct task_struct *task); + +/** + * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. + * @page: struct page pointer returned by alloc_pages(). + * @order: order of allocated struct page. + * @flags: GFP flags used by alloc_pages() + * + * KMSAN marks 1<<@order pages starting at @page as uninitialized, unless + * @flags contain __GFP_ZERO. + */ +void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags); + +/** + * kmsan_free_page() - Notify KMSAN about a free_pages() call. + * @page: struct page pointer passed to free_pages(). + * @order: order of deallocated struct page. + * + * KMSAN marks freed memory as uninitialized. + */ +void kmsan_free_page(struct page *page, unsigned int order); + +/** + * kmsan_copy_page_meta() - Copy KMSAN metadata between two pages. + * @dst: destination page. + * @src: source page. + * + * KMSAN copies the contents of metadata pages for @src into the metadata pages + * for @dst. If @dst has no associated metadata pages, nothing happens. + * If @src has no associated metadata pages, @dst metadata pages are unpoisoned. + */ +void kmsan_copy_page_meta(struct page *dst, struct page *src); + +/** + * kmsan_gup_pgd_range() - Notify KMSAN about a gup_pgd_range() call. + * @pages: array of struct page pointers. + * @nr: array size. 
+ * + * gup_pgd_range() creates new pages, some of which may belong to the userspace + * memory. In that case KMSAN marks them as initialized. + */ +void kmsan_gup_pgd_range(struct page **pages, int nr); + +/** + * kmsan_slab_alloc() - Notify KMSAN about a slab allocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * @flags: GFP flags passed to the allocator. + * + * Depending on cache flags and GFP flags, KMSAN sets up the metadata of the + * newly created object, marking it as initialized or uninitialized. + */ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); + +/** + * kmsan_slab_free() - Notify KMSAN about a slab deallocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * + * KMSAN marks the freed object as uninitialized. + */ +void kmsan_slab_free(struct kmem_cache *s, void *object); + +/** + * kmsan_kmalloc_large() - Notify KMSAN about a large slab allocation. + * @ptr: object pointer. + * @size: object size. + * @flags: GFP flags passed to the allocator. + * + * Similar to kmsan_slab_alloc(), but for large allocations. + */ +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); + +/** + * kmsan_kfree_large() - Notify KMSAN about a large slab deallocation. + * @ptr: object pointer. + * + * Similar to kmsan_slab_free(), but for large allocations. + */ +void kmsan_kfree_large(const void *ptr); + +/** + * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap. + * @start: start of vmapped range. + * @end: end of vmapped range. + * @prot: page protection flags used for vmap. + * @pages: array of pages. + * @page_shift: page_shift passed to vmap_range_noflush(). + * + * KMSAN maps shadow and origin pages of @pages into contiguous ranges in + * vmalloc metadata address range. 
+ */ +void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift); + +/** + * kmsan_vunmap_kernel_range_noflush() - Notify KMSAN about a vunmap. + * @start: start of vunmapped range. + * @end: end of vunmapped range. + * + * KMSAN unmaps the contiguous metadata ranges created by + * kmsan_map_kernel_range_noflush(). + */ +void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end); + +/** + * kmsan_ioremap_page_range() - Notify KMSAN about a ioremap_page_range() call. + * @addr: range start. + * @end: range end. + * @phys_addr: physical range start. + * @prot: page protection flags used for ioremap_page_range(). + * @page_shift: page_shift argument passed to vmap_range_noflush(). + * + * KMSAN creates new metadata pages for the physical pages mapped into the + * virtual memory. + */ +void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift); + +/** + * kmsan_iounmap_page_range() - Notify KMSAN about a iounmap_page_range() call. + * @start: range start. + * @end: range end. + * + * KMSAN unmaps the metadata pages for the given range and, unlike for + * vunmap_page_range(), also deallocates them. + */ +void kmsan_iounmap_page_range(unsigned long start, unsigned long end); + +/** + * kmsan_handle_dma() - Handle a DMA data transfer. + * @page: first page of the buffer. + * @offset: offset of the buffer within the first page. + * @size: buffer size. + * @dir: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffer, if it is copied to device; + * * initializes the buffer, if it is copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. + */ +void kmsan_handle_dma(struct page *page, size_t offset, size_t size, + enum dma_data_direction dir); + +/** + * kmsan_handle_dma_sg() - Handle a DMA transfer using scatterlist. 
+ * @sg: scatterlist holding DMA buffers. + * @nents: number of scatterlist entries. + * @dir: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffers in the scatterlist, if they are copied to device; + * * initializes the buffers, if they are copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. + */ +void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir); + +/** + * kmsan_handle_urb() - Handle a USB data transfer. + * @urb: struct urb pointer. + * @is_out: data transfer direction (true means output to hardware). + * + * If @is_out is true, KMSAN checks the transfer buffer of @urb. Otherwise, + * KMSAN initializes the transfer buffer. + */ +void kmsan_handle_urb(const struct urb *urb, bool is_out); + +/** + * kmsan_instrumentation_begin() - handle instrumentation_begin(). + * @regs: pointer to struct pt_regs that non-instrumented code passes to + * instrumented code. + */ +void kmsan_instrumentation_begin(struct pt_regs *regs); + +#else + +static inline void kmsan_init_shadow(void) +{ +} + +static inline void kmsan_init_runtime(void) +{ +} + +static inline bool kmsan_memblock_free_pages(struct page *page, + unsigned int order) +{ + return true; +} + +static inline void kmsan_task_create(struct task_struct *task) +{ +} + +static inline void kmsan_task_exit(struct task_struct *task) +{ +} + +static inline int kmsan_alloc_page(struct page *page, unsigned int order, + gfp_t flags) +{ + return 0; +} + +static inline void kmsan_free_page(struct page *page, unsigned int order) +{ +} + +static inline void kmsan_copy_page_meta(struct page *dst, struct page *src) +{ +} + +static inline void kmsan_gup_pgd_range(struct page **pages, int nr) +{ +} + +static inline void kmsan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags) +{ +} + +static inline void kmsan_slab_free(struct kmem_cache *s, void *object) +{ +} + +static inline void kmsan_kmalloc_large(const 
void *ptr, size_t size, + gfp_t flags) +{ +} + +static inline void kmsan_kfree_large(const void *ptr) +{ +} + +static inline void kmsan_vmap_pages_range_noflush(unsigned long start, + unsigned long end, + pgprot_t prot, + struct page **pages, + unsigned int page_shift) +{ +} + +static inline void kmsan_vunmap_range_noflush(unsigned long start, + unsigned long end) +{ +} + +static inline void kmsan_ioremap_page_range(unsigned long start, + unsigned long end, + phys_addr_t phys_addr, + pgprot_t prot, + unsigned int page_shift) +{ +} + +static inline void kmsan_iounmap_page_range(unsigned long start, + unsigned long end) +{ +} + +static inline void kmsan_handle_dma(struct page *page, size_t offset, + size_t size, enum dma_data_direction dir) +{ +} + +static inline void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir) +{ +} + +static inline void kmsan_handle_urb(const struct urb *urb, bool is_out) +{ +} + +static inline void kmsan_instrumentation_begin(struct pt_regs *regs) +{ +} + +#endif + +#endif /* _LINUX_KMSAN_H */ diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index c3a6e62096006..bdbe4b39b826d 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -233,6 +233,18 @@ struct page { not kmapped, ie. highmem) */ #endif /* WANT_PAGE_VIRTUAL */ +#ifdef CONFIG_KMSAN + /* + * KMSAN metadata for this page: + * - shadow page: every bit indicates whether the corresponding + * bit of the original page is initialized (0) or not (1); + * - origin page: every 4 bytes contain an id of the stack trace + * where the uninitialized value was created. 
+	 */
+	struct page *kmsan_shadow;
+	struct page *kmsan_origin;
+#endif
+
 #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
 	int _last_cpupid;
 #endif
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 78c351e35fec6..8d076f82d5072 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1341,6 +1342,10 @@ struct task_struct {
 #endif
 #endif
 
+#ifdef CONFIG_KMSAN
+	struct kmsan_ctx kmsan_ctx;
+#endif
+
 #if IS_ENABLED(CONFIG_KUNIT)
 	struct kunit *kunit_test;
 #endif
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 5e14e32056add..304374f2c300a 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -963,6 +963,7 @@ config DEBUG_STACKOVERFLOW
 
 source "lib/Kconfig.kasan"
 source "lib/Kconfig.kfence"
+source "lib/Kconfig.kmsan"
 
 endmenu # "Memory Debugging"
diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan
new file mode 100644
index 0000000000000..02fd6db792b1f
--- /dev/null
+++ b/lib/Kconfig.kmsan
@@ -0,0 +1,18 @@
+config HAVE_ARCH_KMSAN
+	bool
+
+config HAVE_KMSAN_COMPILER
+	def_bool (CC_IS_CLANG && $(cc-option,-fsanitize=kernel-memory -mllvm -msan-disable-checks=1))
+
+config KMSAN
+	bool "KMSAN: detector of uninitialized values use"
+	depends on HAVE_ARCH_KMSAN && HAVE_KMSAN_COMPILER
+	depends on SLUB && !KASAN && !KCSAN
+	depends on CC_IS_CLANG && CLANG_VERSION >= 140000
+	select STACKDEPOT
+	help
+	  KernelMemorySanitizer (KMSAN) is a dynamic detector of uses of
+	  uninitialized values in the kernel. It is based on compiler
+	  instrumentation provided by Clang and thus requires Clang to build.
+
+	  See for more details.
diff --git a/mm/Makefile b/mm/Makefile
index d6c0042e3aa0d..8e9319a9affea 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -87,6 +87,7 @@ obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN) += kasan/
 obj-$(CONFIG_KFENCE) += kfence/
+obj-$(CONFIG_KMSAN) += kmsan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
 obj-$(CONFIG_MEMTEST) += memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile
new file mode 100644
index 0000000000000..f57a956cb1c8b
--- /dev/null
+++ b/mm/kmsan/Makefile
@@ -0,0 +1,22 @@
+obj-y := core.o instrumentation.o init.o hooks.o report.o shadow.o annotations.o
+
+KMSAN_SANITIZE := n
+KCOV_INSTRUMENT := n
+UBSAN_SANITIZE := n
+
+KMSAN_SANITIZE_kmsan_annotations.o := y
+
+# Disable instrumentation of KMSAN runtime with other tools.
+CC_FLAGS_KMSAN_RUNTIME := -fno-stack-protector
+CC_FLAGS_KMSAN_RUNTIME += $(call cc-option,-fno-conserve-stack)
+CC_FLAGS_KMSAN_RUNTIME += -DDISABLE_BRANCH_PROFILING
+
+CFLAGS_REMOVE.o = $(CC_FLAGS_FTRACE)
+
+CFLAGS_annotations.o := $(CC_FLAGS_KMSAN_RUNTIME)
+CFLAGS_core.o := $(CC_FLAGS_KMSAN_RUNTIME)
+CFLAGS_hooks.o := $(CC_FLAGS_KMSAN_RUNTIME)
+CFLAGS_init.o := $(CC_FLAGS_KMSAN_RUNTIME)
+CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME)
+CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME)
+CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME)
diff --git a/mm/kmsan/annotations.c b/mm/kmsan/annotations.c
new file mode 100644
index 0000000000000..037468d1840f2
--- /dev/null
+++ b/mm/kmsan/annotations.c
@@ -0,0 +1,28 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KMSAN annotations.
+ *
+ * The kmsan_init_SIZE functions reside in a separate translation unit to
+ * prevent inlining them. Clang may inline functions marked with the
+ * __no_sanitize_memory attribute into functions without it, which effectively
+ * results in ignoring the attribute.
+ *
+ * Copyright (C) 2017-2021 Google LLC
+ * Author: Alexander Potapenko
+ *
+ */
+
+#include
+#include
+
+#define DECLARE_KMSAN_INIT(size, t)                       \
+	__no_sanitize_memory t kmsan_init_##size(t value) \
+	{                                                 \
+		return value;                             \
+	}                                                 \
+	EXPORT_SYMBOL(kmsan_init_##size)
+
+DECLARE_KMSAN_INIT(1, u8);
+DECLARE_KMSAN_INIT(2, u16);
+DECLARE_KMSAN_INIT(4, u32);
+DECLARE_KMSAN_INIT(8, u64);
diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c
new file mode 100644
index 0000000000000..b2bb25a8013e4
--- /dev/null
+++ b/mm/kmsan/core.c
@@ -0,0 +1,427 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * KMSAN runtime library.
+ *
+ * Copyright (C) 2017-2021 Google LLC
+ * Author: Alexander Potapenko
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "../slab.h"
+#include "kmsan.h"
+
+/*
+ * Avoid creating too long origin chains, these are unlikely to participate in
+ * real reports.
+ */
+#define MAX_CHAIN_DEPTH 7
+#define NUM_SKIPPED_TO_WARN 10000
+
+bool kmsan_enabled __read_mostly;
+
+/*
+ * Per-CPU KMSAN context to be used in interrupts, where current->kmsan is
+ * unavailable.
+ */
+DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx);
+
+void kmsan_internal_task_create(struct task_struct *task)
+{
+	struct kmsan_ctx *ctx = &task->kmsan_ctx;
+
+	__memset(ctx, 0, sizeof(struct kmsan_ctx));
+	ctx->allow_reporting = true;
+	kmsan_internal_unpoison_memory(current_thread_info(),
+				       sizeof(struct thread_info), false);
+}
+
+void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags,
+				  unsigned int poison_flags)
+{
+	u32 extra_bits =
+		kmsan_extra_bits(/*depth*/ 0, poison_flags & KMSAN_POISON_FREE);
+	bool checked = poison_flags & KMSAN_POISON_CHECK;
+	depot_stack_handle_t handle;
+
+	handle = kmsan_save_stack_with_flags(flags, extra_bits);
+	kmsan_internal_set_shadow_origin(address, size, -1, handle, checked);
+}
+
+void kmsan_internal_unpoison_memory(void *address, size_t size, bool checked)
+{
+	kmsan_internal_set_shadow_origin(address, size, 0, 0, checked);
+}
+
+depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags,
+						 unsigned int extra)
+{
+	unsigned long entries[KMSAN_STACK_DEPTH];
+	unsigned int nr_entries;
+
+	nr_entries = stack_trace_save(entries, KMSAN_STACK_DEPTH, 0);
+	nr_entries = filter_irq_stacks(entries, nr_entries);
+
+	/* Don't sleep (see might_sleep_if() in __alloc_pages_nodemask()). */
+	flags &= ~__GFP_DIRECT_RECLAIM;
+
+	return __stack_depot_save(entries, nr_entries, extra, flags, true);
+}
+
+/* Copy the metadata following the memmove() behavior.
*/ +void kmsan_internal_memmove_metadata(void *dst, void *src, size_t n) +{ + depot_stack_handle_t old_origin = 0, chain_origin, new_origin = 0; + int src_slots, dst_slots, i, iter, step, skip_bits; + depot_stack_handle_t *origin_src, *origin_dst; + void *shadow_src, *shadow_dst; + u32 *align_shadow_src, shadow; + bool backwards; + + shadow_dst = kmsan_get_metadata(dst, KMSAN_META_SHADOW); + if (!shadow_dst) + return; + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(dst, n)); + + shadow_src = kmsan_get_metadata(src, KMSAN_META_SHADOW); + if (!shadow_src) { + /* + * |src| is untracked: zero out destination shadow, ignore the + * origins, we're done. + */ + __memset(shadow_dst, 0, n); + return; + } + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(src, n)); + + __memmove(shadow_dst, shadow_src, n); + + origin_dst = kmsan_get_metadata(dst, KMSAN_META_ORIGIN); + origin_src = kmsan_get_metadata(src, KMSAN_META_ORIGIN); + KMSAN_WARN_ON(!origin_dst || !origin_src); + src_slots = (ALIGN((u64)src + n, KMSAN_ORIGIN_SIZE) - + ALIGN_DOWN((u64)src, KMSAN_ORIGIN_SIZE)) / + KMSAN_ORIGIN_SIZE; + dst_slots = (ALIGN((u64)dst + n, KMSAN_ORIGIN_SIZE) - + ALIGN_DOWN((u64)dst, KMSAN_ORIGIN_SIZE)) / + KMSAN_ORIGIN_SIZE; + KMSAN_WARN_ON(!src_slots || !dst_slots); + KMSAN_WARN_ON((src_slots < 1) || (dst_slots < 1)); + KMSAN_WARN_ON((src_slots - dst_slots > 1) || + (dst_slots - src_slots < -1)); + + backwards = dst > src; + i = backwards ? min(src_slots, dst_slots) - 1 : 0; + iter = backwards ? -1 : 1; + + align_shadow_src = + (u32 *)ALIGN_DOWN((u64)shadow_src, KMSAN_ORIGIN_SIZE); + for (step = 0; step < min(src_slots, dst_slots); step++, i += iter) { + KMSAN_WARN_ON(i < 0); + shadow = align_shadow_src[i]; + if (i == 0) { + /* + * If |src| isn't aligned on KMSAN_ORIGIN_SIZE, don't + * look at the first |src % KMSAN_ORIGIN_SIZE| bytes + * of the first shadow slot. 
+ */ + skip_bits = ((u64)src % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow << skip_bits) >> skip_bits; + } + if (i == src_slots - 1) { + /* + * If |src + n| isn't aligned on + * KMSAN_ORIGIN_SIZE, don't look at the last + * |(src + n) % KMSAN_ORIGIN_SIZE| bytes of the + * last shadow slot. + */ + skip_bits = (((u64)src + n) % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow >> skip_bits) << skip_bits; + } + /* + * Overwrite the origin only if the corresponding + * shadow is nonempty. + */ + if (origin_src[i] && (origin_src[i] != old_origin) && shadow) { + old_origin = origin_src[i]; + chain_origin = kmsan_internal_chain_origin(old_origin); + /* + * kmsan_internal_chain_origin() may return + * NULL, but we don't want to lose the previous + * origin value. + */ + if (chain_origin) + new_origin = chain_origin; + else + new_origin = old_origin; + } + if (shadow) + origin_dst[i] = new_origin; + else + origin_dst[i] = 0; + } +} + +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id) +{ + unsigned long entries[3]; + u32 extra_bits; + int depth; + bool uaf; + + if (!id) + return id; + /* + * Make sure we have enough spare bits in |id| to hold the UAF bit and + * the chain depth. 
+ */ + BUILD_BUG_ON((1 << STACK_DEPOT_EXTRA_BITS) <= (MAX_CHAIN_DEPTH << 1)); + + extra_bits = stack_depot_get_extra_bits(id); + depth = kmsan_depth_from_eb(extra_bits); + uaf = kmsan_uaf_from_eb(extra_bits); + + if (depth >= MAX_CHAIN_DEPTH) { + static atomic_long_t kmsan_skipped_origins; + long skipped = atomic_long_inc_return(&kmsan_skipped_origins); + + if (skipped % NUM_SKIPPED_TO_WARN == 0) { + pr_warn("not chained %ld origins\n", skipped); + dump_stack(); + kmsan_print_origin(id); + } + return id; + } + depth++; + extra_bits = kmsan_extra_bits(depth, uaf); + + entries[0] = KMSAN_CHAIN_MAGIC_ORIGIN; + entries[1] = kmsan_save_stack_with_flags(GFP_ATOMIC, 0); + entries[2] = id; + return __stack_depot_save(entries, ARRAY_SIZE(entries), extra_bits, + GFP_ATOMIC, true); +} + +void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b, + u32 origin, bool checked) +{ + u64 address = (u64)addr; + void *shadow_start; + u32 *origin_start; + size_t pad = 0; + int i; + + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); + shadow_start = kmsan_get_metadata(addr, KMSAN_META_SHADOW); + if (!shadow_start) { + /* + * kmsan_metadata_is_contiguous() is true, so either all shadow + * and origin pages are NULL, or all are non-NULL. 
+ */ + if (checked) { + pr_err("%s: not memsetting %zu bytes starting at %px, because the shadow is NULL\n", + __func__, size, addr); + BUG(); + } + return; + } + __memset(shadow_start, b, size); + + if (!IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) { + pad = address % KMSAN_ORIGIN_SIZE; + address -= pad; + size += pad; + } + size = ALIGN(size, KMSAN_ORIGIN_SIZE); + origin_start = + (u32 *)kmsan_get_metadata((void *)address, KMSAN_META_ORIGIN); + + for (i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) + origin_start[i] = origin; +} + +struct page *kmsan_vmalloc_to_page_or_null(void *vaddr) +{ + struct page *page; + + if (!kmsan_internal_is_vmalloc_addr(vaddr) && + !kmsan_internal_is_module_addr(vaddr)) + return NULL; + page = vmalloc_to_page(vaddr); + if (pfn_valid(page_to_pfn(page))) + return page; + else + return NULL; +} + +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason) +{ + depot_stack_handle_t cur_origin = 0, new_origin = 0; + unsigned long addr64 = (unsigned long)addr; + depot_stack_handle_t *origin = NULL; + unsigned char *shadow = NULL; + int cur_off_start = -1; + int i, chunk_size; + size_t pos = 0; + + if (!size) + return; + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); + while (pos < size) { + chunk_size = min(size - pos, + PAGE_SIZE - ((addr64 + pos) % PAGE_SIZE)); + shadow = kmsan_get_metadata((void *)(addr64 + pos), + KMSAN_META_SHADOW); + if (!shadow) { + /* + * This page is untracked. If there were uninitialized + * bytes before, report them. + */ + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos - 1, user_addr, + reason); + kmsan_leave_runtime(); + } + cur_origin = 0; + cur_off_start = -1; + pos += chunk_size; + continue; + } + for (i = 0; i < chunk_size; i++) { + if (!shadow[i]) { + /* + * This byte is unpoisoned. If there were + * poisoned bytes before, report them. 
+ */ + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(); + } + cur_origin = 0; + cur_off_start = -1; + continue; + } + origin = kmsan_get_metadata((void *)(addr64 + pos + i), + KMSAN_META_ORIGIN); + KMSAN_WARN_ON(!origin); + new_origin = *origin; + /* + * Encountered new origin - report the previous + * uninitialized range. + */ + if (cur_origin != new_origin) { + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(); + } + cur_origin = new_origin; + cur_off_start = pos + i; + } + } + pos += chunk_size; + } + KMSAN_WARN_ON(pos != size); + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, cur_off_start, pos - 1, + user_addr, reason); + kmsan_leave_runtime(); + } +} + +bool kmsan_metadata_is_contiguous(void *addr, size_t size) +{ + char *cur_shadow = NULL, *next_shadow = NULL, *cur_origin = NULL, + *next_origin = NULL; + u64 cur_addr = (u64)addr, next_addr = cur_addr + PAGE_SIZE; + depot_stack_handle_t *origin_p; + bool all_untracked = false; + + if (!size) + return true; + + /* The whole range belongs to the same page. 
*/ + if (ALIGN_DOWN(cur_addr + size - 1, PAGE_SIZE) == + ALIGN_DOWN(cur_addr, PAGE_SIZE)) + return true; + + cur_shadow = kmsan_get_metadata((void *)cur_addr, /*is_origin*/ false); + if (!cur_shadow) + all_untracked = true; + cur_origin = kmsan_get_metadata((void *)cur_addr, /*is_origin*/ true); + if (all_untracked && cur_origin) + goto report; + + for (; next_addr < (u64)addr + size; + cur_addr = next_addr, cur_shadow = next_shadow, + cur_origin = next_origin, next_addr += PAGE_SIZE) { + next_shadow = kmsan_get_metadata((void *)next_addr, false); + next_origin = kmsan_get_metadata((void *)next_addr, true); + if (all_untracked) { + if (next_shadow || next_origin) + goto report; + if (!next_shadow && !next_origin) + continue; + } + if (((u64)cur_shadow == ((u64)next_shadow - PAGE_SIZE)) && + ((u64)cur_origin == ((u64)next_origin - PAGE_SIZE))) + continue; + goto report; + } + return true; + +report: + pr_err("%s: attempting to access two shadow page ranges.\n", __func__); + pr_err("Access of size %zu at %px.\n", size, addr); + pr_err("Addresses belonging to different ranges: %px and %px\n", + (void *)cur_addr, (void *)next_addr); + pr_err("page[0].shadow: %px, page[1].shadow: %px\n", cur_shadow, + next_shadow); + pr_err("page[0].origin: %px, page[1].origin: %px\n", cur_origin, + next_origin); + origin_p = kmsan_get_metadata(addr, KMSAN_META_ORIGIN); + if (origin_p) { + pr_err("Origin: %08x\n", *origin_p); + kmsan_print_origin(*origin_p); + } else { + pr_err("Origin: unavailable\n"); + } + return false; +} + +bool kmsan_internal_is_module_addr(void *vaddr) +{ + return ((u64)vaddr >= MODULES_VADDR) && ((u64)vaddr < MODULES_END); +} + +bool kmsan_internal_is_vmalloc_addr(void *addr) +{ + return ((u64)addr >= VMALLOC_START) && ((u64)addr < VMALLOC_END); +} diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c new file mode 100644 index 0000000000000..4012d7a4adb53 --- /dev/null +++ b/mm/kmsan/hooks.c @@ -0,0 +1,400 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN hooks for kernel 
subsystems. + * + * These functions handle creation of KMSAN metadata for memory allocations. + * + * Copyright (C) 2018-2021 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../slab.h" +#include "kmsan.h" + +/* + * Instrumented functions shouldn't be called under + * kmsan_enter_runtime()/kmsan_leave_runtime(), because this will lead to + * skipping effects of functions like memset() inside instrumented code. + */ + +void kmsan_task_create(struct task_struct *task) +{ + kmsan_enter_runtime(); + kmsan_internal_task_create(task); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_task_create); + +void kmsan_task_exit(struct task_struct *task) +{ + struct kmsan_ctx *ctx = &task->kmsan_ctx; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ctx->allow_reporting = false; +} +EXPORT_SYMBOL(kmsan_task_exit); + +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) +{ + if (unlikely(object == NULL)) + return; + if (!kmsan_enabled || kmsan_in_runtime()) + return; + /* + * There's a ctor or this is an RCU cache - do nothing. The memory + * status hasn't changed since last use. + */ + if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU)) + return; + + kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_memory(object, s->object_size, + KMSAN_POISON_CHECK); + else + kmsan_internal_poison_memory(object, s->object_size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_slab_alloc); + +void kmsan_slab_free(struct kmem_cache *s, void *object) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + /* RCU slabs could be legally used after free within the RCU period */ + if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON))) + return; + /* + * If there's a constructor, freed memory must remain in the same state + * until the next allocation. 
We cannot save its state to detect + * use-after-free bugs, instead we just keep it unpoisoned. + */ + if (s->ctor) + return; + kmsan_enter_runtime(); + kmsan_internal_poison_memory(object, s->object_size, GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_slab_free); + +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags) +{ + if (unlikely(ptr == NULL)) + return; + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_memory((void *)ptr, size, + /*checked*/ true); + else + kmsan_internal_poison_memory((void *)ptr, size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_kmalloc_large); + +void kmsan_kfree_large(const void *ptr) +{ + struct page *page; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + page = virt_to_head_page((void *)ptr); + KMSAN_WARN_ON(ptr != page_address(page)); + kmsan_internal_poison_memory((void *)ptr, + PAGE_SIZE << compound_order(page), + GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_kfree_large); + +static unsigned long vmalloc_shadow(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, + KMSAN_META_SHADOW); +} + +static unsigned long vmalloc_origin(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, + KMSAN_META_ORIGIN); +} + +void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end) +{ + __vunmap_range_noflush(vmalloc_shadow(start), vmalloc_shadow(end)); + __vunmap_range_noflush(vmalloc_origin(start), vmalloc_origin(end)); + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); +} +EXPORT_SYMBOL(kmsan_vunmap_range_noflush); + +/* + * This function creates new shadow/origin pages for the physical pages mapped + * into the virtual memory. 
If those physical pages already had shadow/origin, + * those are ignored. + */ +void kmsan_ioremap_page_range(unsigned long start, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift) +{ + gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO; + struct page *shadow, *origin; + unsigned long off = 0; + int i, nr; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + kmsan_enter_runtime(); + for (i = 0; i < nr; i++, off += PAGE_SIZE) { + shadow = alloc_pages(gfp_mask, 1); + origin = alloc_pages(gfp_mask, 1); + __vmap_pages_range_noflush( + vmalloc_shadow(start + off), + vmalloc_shadow(start + off + PAGE_SIZE), prot, &shadow, + page_shift); + __vmap_pages_range_noflush( + vmalloc_origin(start + off), + vmalloc_origin(start + off + PAGE_SIZE), prot, &origin, + page_shift); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_ioremap_page_range); + +void kmsan_iounmap_page_range(unsigned long start, unsigned long end) +{ + unsigned long v_shadow, v_origin; + struct page *shadow, *origin; + int i, nr; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + kmsan_enter_runtime(); + v_shadow = (unsigned long)vmalloc_shadow(start); + v_origin = (unsigned long)vmalloc_origin(start); + for (i = 0; i < nr; i++, v_shadow += PAGE_SIZE, v_origin += PAGE_SIZE) { + shadow = kmsan_vmalloc_to_page_or_null((void *)v_shadow); + origin = kmsan_vmalloc_to_page_or_null((void *)v_origin); + __vunmap_range_noflush(v_shadow, vmalloc_shadow(end)); + __vunmap_range_noflush(v_origin, vmalloc_origin(end)); + if (shadow) + __free_pages(shadow, 1); + if (origin) + __free_pages(origin, 1); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + kmsan_leave_runtime(); +} 
+EXPORT_SYMBOL(kmsan_iounmap_page_range); + +void kmsan_copy_to_user(const void *to, const void *from, size_t to_copy, + size_t left) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + /* + * At this point we've copied the memory already. It's hard to check it + * before copying, as the size of actually copied buffer is unknown. + */ + + /* copy_to_user() may copy zero bytes. No need to check. */ + if (!to_copy) + return; + /* Or maybe copy_to_user() failed to copy anything. */ + if (to_copy <= left) + return; + + ua_flags = user_access_save(); + if ((u64)to < TASK_SIZE) { + /* This is a user memory access, check it. */ + kmsan_internal_check_memory((void *)from, to_copy - left, to, + REASON_COPY_TO_USER); + user_access_restore(ua_flags); + return; + } + /* Otherwise this is a kernel memory access. This happens when a compat + * syscall passes an argument allocated on the kernel stack to a real + * syscall. + * Don't check anything, just copy the shadow of the copied bytes. + */ + kmsan_internal_memmove_metadata((void *)to, (void *)from, + to_copy - left); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(kmsan_copy_to_user); + +/* Helper function to check an URB. 
*/ +void kmsan_handle_urb(const struct urb *urb, bool is_out) +{ + if (!urb) + return; + if (is_out) + kmsan_internal_check_memory(urb->transfer_buffer, + urb->transfer_buffer_length, + /*user_addr*/ 0, REASON_SUBMIT_URB); + else + kmsan_internal_unpoison_memory(urb->transfer_buffer, + urb->transfer_buffer_length, + /*checked*/ false); +} +EXPORT_SYMBOL(kmsan_handle_urb); + +static void kmsan_handle_dma_page(const void *addr, size_t size, + enum dma_data_direction dir) +{ + switch (dir) { + case DMA_BIDIRECTIONAL: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); + kmsan_internal_unpoison_memory((void *)addr, size, + /*checked*/ false); + break; + case DMA_TO_DEVICE: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); + break; + case DMA_FROM_DEVICE: + kmsan_internal_unpoison_memory((void *)addr, size, + /*checked*/ false); + break; + case DMA_NONE: + break; + } +} + +/* Helper function to handle DMA data transfers. */ +void kmsan_handle_dma(struct page *page, size_t offset, size_t size, + enum dma_data_direction dir) +{ + u64 page_offset, to_go, addr; + + if (PageHighMem(page)) + return; + addr = (u64)page_address(page) + offset; + /* + * The kernel may occasionally give us adjacent DMA pages not belonging + * to the same allocation. Process them separately to avoid triggering + * internal KMSAN checks. + */ + while (size > 0) { + page_offset = addr % PAGE_SIZE; + to_go = min(PAGE_SIZE - page_offset, (u64)size); + kmsan_handle_dma_page((void *)addr, to_go, dir); + addr += to_go; + size -= to_go; + } +} +EXPORT_SYMBOL(kmsan_handle_dma); + +void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir) +{ + struct scatterlist *item; + int i; + + for_each_sg(sg, item, nents, i) + kmsan_handle_dma(sg_page(item), item->offset, item->length, + dir); +} +EXPORT_SYMBOL(kmsan_handle_dma_sg); + +/* Functions from kmsan-checks.h follow. 
*/ +void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. */ + kmsan_internal_poison_memory((void *)address, size, flags, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_poison_memory); + +void kmsan_unpoison_memory(const void *address, size_t size) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. */ + kmsan_internal_unpoison_memory((void *)address, size, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(kmsan_unpoison_memory); + +void kmsan_gup_pgd_range(struct page **pages, int nr) +{ + void *page_addr; + int i; + + /* + * gup_pgd_range() has just created a number of new pages that KMSAN + * treats as uninitialized. In the case they belong to the userspace + * memory, unpoison the corresponding kernel pages. 
+ */ + for (i = 0; i < nr; i++) { + if (PageHighMem(pages[i])) + continue; + page_addr = page_address(pages[i]); + if (((u64)page_addr < TASK_SIZE) && + ((u64)page_addr + PAGE_SIZE < TASK_SIZE)) + kmsan_unpoison_memory(page_addr, PAGE_SIZE); + } +} +EXPORT_SYMBOL(kmsan_gup_pgd_range); + +void kmsan_check_memory(const void *addr, size_t size) +{ + if (!kmsan_enabled) + return; + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); +} +EXPORT_SYMBOL(kmsan_check_memory); + +void kmsan_instrumentation_begin(struct pt_regs *regs) +{ + struct kmsan_context_state *state = &kmsan_get_context()->cstate; + + if (state) + __memset(state, 0, sizeof(struct kmsan_context_state)); + if (!kmsan_enabled || !regs) + return; + kmsan_internal_unpoison_memory(regs, sizeof(*regs), /*checked*/ true); +} +EXPORT_SYMBOL(kmsan_instrumentation_begin); diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c new file mode 100644 index 0000000000000..49ab06cde082a --- /dev/null +++ b/mm/kmsan/init.c @@ -0,0 +1,238 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN initialization routines. + * + * Copyright (C) 2017-2021 Google LLC + * Author: Alexander Potapenko + * + */ + +#include "kmsan.h" + +#include +#include +#include + +#define NUM_FUTURE_RANGES 128 +struct start_end_pair { + u64 start, end; +}; + +static struct start_end_pair start_end_pairs[NUM_FUTURE_RANGES] __initdata; +static int future_index __initdata; + +/* + * Record a range of memory for which the metadata pages will be created once + * the page allocator becomes available. 
+ */ +static void __init kmsan_record_future_shadow_range(void *start, void *end) +{ + u64 nstart = (u64)start, nend = (u64)end, cstart, cend; + bool merged = false; + int i; + + KMSAN_WARN_ON(future_index == NUM_FUTURE_RANGES); + KMSAN_WARN_ON((nstart >= nend) || !nstart || !nend); + nstart = ALIGN_DOWN(nstart, PAGE_SIZE); + nend = ALIGN(nend, PAGE_SIZE); + + /* + * Scan the existing ranges to see if any of them overlaps with + * [start, end). In that case, merge the two ranges instead of + * creating a new one. + * The number of ranges is less than 20, so there is no need to organize + * them into a more intelligent data structure. + */ + for (i = 0; i < future_index; i++) { + cstart = start_end_pairs[i].start; + cend = start_end_pairs[i].end; + if ((cstart < nstart && cend < nstart) || + (cstart > nend && cend > nend)) + /* ranges are disjoint - do not merge */ + continue; + start_end_pairs[i].start = min(nstart, cstart); + start_end_pairs[i].end = max(nend, cend); + merged = true; + break; + } + if (merged) + return; + start_end_pairs[future_index].start = nstart; + start_end_pairs[future_index].end = nend; + future_index++; +} + +/* + * Initialize the shadow for existing mappings during kernel initialization. + * These include kernel text/data sections, NODE_DATA and future ranges + * registered while creating other data (e.g. percpu). + * + * Allocations via memblock can be only done before slab is initialized. 
+ */ +void __init kmsan_init_shadow(void) +{ + const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE); + phys_addr_t p_start, p_end; + int nid; + u64 i; + + for_each_reserved_mem_range(i, &p_start, &p_end) + kmsan_record_future_shadow_range(phys_to_virt(p_start), + phys_to_virt(p_end)); + /* Allocate shadow for .data */ + kmsan_record_future_shadow_range(_sdata, _edata); + + for_each_online_node(nid) + kmsan_record_future_shadow_range( + NODE_DATA(nid), (char *)NODE_DATA(nid) + nd_size); + + for (i = 0; i < future_index; i++) + kmsan_init_alloc_meta_for_range( + (void *)start_end_pairs[i].start, + (void *)start_end_pairs[i].end); +} +EXPORT_SYMBOL(kmsan_init_shadow); + +struct page_pair { + struct page *shadow, *origin; +}; +static struct page_pair held_back[MAX_ORDER] __initdata; + +/* + * Eager metadata allocation. When the memblock allocator is freeing pages to + * pagealloc, we use 2/3 of them as metadata for the remaining 1/3. + * We store the pointers to the returned blocks of pages in held_back[] grouped + * by their order: when kmsan_memblock_free_pages() is called for the first + * time with a certain order, it is reserved as a shadow block, for the second + * time - as an origin block. On the third time the incoming block receives its + * shadow and origin ranges from the previously saved shadow and origin blocks, + * after which held_back[order] can be used again. + * + * At the very end there may be leftover blocks in held_back[]. They are + * collected later by kmsan_memblock_discard(). 
+ */ +bool kmsan_memblock_free_pages(struct page *page, unsigned int order) +{ + struct page *shadow, *origin; + + if (!held_back[order].shadow) { + held_back[order].shadow = page; + return false; + } + if (!held_back[order].origin) { + held_back[order].origin = page; + return false; + } + shadow = held_back[order].shadow; + origin = held_back[order].origin; + kmsan_setup_meta(page, shadow, origin, order); + + held_back[order].shadow = NULL; + held_back[order].origin = NULL; + return true; +} + +#define MAX_BLOCKS 8 +struct smallstack { + struct page *items[MAX_BLOCKS]; + int index; + int order; +}; + +static struct smallstack collect = { + .index = 0, + .order = MAX_ORDER, +}; + +static void smallstack_push(struct smallstack *stack, struct page *pages) +{ + KMSAN_WARN_ON(stack->index == MAX_BLOCKS); + stack->items[stack->index] = pages; + stack->index++; +} +#undef MAX_BLOCKS + +static struct page *smallstack_pop(struct smallstack *stack) +{ + struct page *ret; + + KMSAN_WARN_ON(stack->index == 0); + stack->index--; + ret = stack->items[stack->index]; + stack->items[stack->index] = NULL; + return ret; +} + +static void do_collection(void) +{ + struct page *page, *shadow, *origin; + + while (collect.index >= 3) { + page = smallstack_pop(&collect); + shadow = smallstack_pop(&collect); + origin = smallstack_pop(&collect); + kmsan_setup_meta(page, shadow, origin, collect.order); + __free_pages_core(page, collect.order); + } +} + +static void collect_split(void) +{ + struct smallstack tmp = { + .order = collect.order - 1, + .index = 0, + }; + struct page *page; + + if (!collect.order) + return; + while (collect.index) { + page = smallstack_pop(&collect); + smallstack_push(&tmp, &page[0]); + smallstack_push(&tmp, &page[1 << tmp.order]); + } + __memcpy(&collect, &tmp, sizeof(struct smallstack)); +} + +/* + * Memblock is about to go away. Split the page blocks left over in held_back[] + * and return 1/3 of that memory to the system. 
+ */ +static void kmsan_memblock_discard(void) +{ + int i; + + /* + * For each order=N: + * - push held_back[N].shadow and .origin to |collect|; + * - while there are >= 3 elements in |collect|, do garbage collection: + * - pop 3 ranges from |collect|; + * - use two of them as shadow and origin for the third one; + * - repeat; + * - split each remaining element from |collect| into 2 ranges of + * order=N-1, + * - repeat. + */ + collect.order = MAX_ORDER - 1; + for (i = MAX_ORDER - 1; i >= 0; i--) { + if (held_back[i].shadow) + smallstack_push(&collect, held_back[i].shadow); + if (held_back[i].origin) + smallstack_push(&collect, held_back[i].origin); + held_back[i].shadow = NULL; + held_back[i].origin = NULL; + do_collection(); + collect_split(); + } +} + +void __init kmsan_init_runtime(void) +{ + /* Assuming current is init_task */ + kmsan_internal_task_create(current); + kmsan_memblock_discard(); + pr_info("vmalloc area at: %px\n", (void *)VMALLOC_START); + pr_info("Starting KernelMemorySanitizer\n"); + kmsan_enabled = true; +} +EXPORT_SYMBOL(kmsan_init_runtime); diff --git a/mm/kmsan/instrumentation.c b/mm/kmsan/instrumentation.c new file mode 100644 index 0000000000000..1eb2d64aa39a6 --- /dev/null +++ b/mm/kmsan/instrumentation.c @@ -0,0 +1,233 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN compiler API. 
+ * + * Copyright (C) 2017-2021 Google LLC + * Author: Alexander Potapenko + * + */ + +#include "kmsan.h" +#include +#include +#include + +static inline bool is_bad_asm_addr(void *addr, uintptr_t size, bool is_store) +{ + if ((u64)addr < TASK_SIZE) + return true; + if (!kmsan_get_metadata(addr, KMSAN_META_SHADOW)) + return true; + return false; +} + +static inline struct shadow_origin_ptr +get_shadow_origin_ptr(void *addr, u64 size, bool store) +{ + unsigned long ua_flags = user_access_save(); + struct shadow_origin_ptr ret; + + ret = kmsan_get_shadow_origin_ptr(addr, size, store); + user_access_restore(ua_flags); + return ret; +} + +struct shadow_origin_ptr __msan_metadata_ptr_for_load_n(void *addr, + uintptr_t size) +{ + return get_shadow_origin_ptr(addr, size, /*store*/ false); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_load_n); + +struct shadow_origin_ptr __msan_metadata_ptr_for_store_n(void *addr, + uintptr_t size) +{ + return get_shadow_origin_ptr(addr, size, /*store*/ true); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_store_n); + +#define DECLARE_METADATA_PTR_GETTER(size) \ + struct shadow_origin_ptr __msan_metadata_ptr_for_load_##size( \ + void *addr) \ + { \ + return get_shadow_origin_ptr(addr, size, /*store*/ false); \ + } \ + EXPORT_SYMBOL(__msan_metadata_ptr_for_load_##size); \ + struct shadow_origin_ptr __msan_metadata_ptr_for_store_##size( \ + void *addr) \ + { \ + return get_shadow_origin_ptr(addr, size, /*store*/ true); \ + } \ + EXPORT_SYMBOL(__msan_metadata_ptr_for_store_##size) + +DECLARE_METADATA_PTR_GETTER(1); +DECLARE_METADATA_PTR_GETTER(2); +DECLARE_METADATA_PTR_GETTER(4); +DECLARE_METADATA_PTR_GETTER(8); + +void __msan_instrument_asm_store(void *addr, uintptr_t size) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + /* + * Most of the accesses are below 32 bytes. The two exceptions so far + * are clwb() (64 bytes) and FPU state (512 bytes). 
+ * It's unlikely that the assembly will touch more than 512 bytes. + */ + if (size > 512) { + WARN_ONCE(1, "assembly store size too big: %lu\n", size); + size = 8; + } + if (is_bad_asm_addr(addr, size, /*is_store*/ true)) { + user_access_restore(ua_flags); + return; + } + kmsan_enter_runtime(); + /* Unpoisoning the memory on best effort. */ + kmsan_internal_unpoison_memory(addr, size, /*checked*/ false); + kmsan_leave_runtime(); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(__msan_instrument_asm_store); + +void *__msan_memmove(void *dst, const void *src, uintptr_t n) +{ + void *result; + + result = __memmove(dst, src, n); + if (!n) + /* Some people call memmove() with zero length. */ + return result; + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + kmsan_internal_memmove_metadata(dst, (void *)src, n); + + return result; +} +EXPORT_SYMBOL(__msan_memmove); + +void *__msan_memcpy(void *dst, const void *src, uintptr_t n) +{ + void *result; + + result = __memcpy(dst, src, n); + if (!n) + /* Some people call memcpy() with zero length. */ + return result; + + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + /* Using memmove instead of memcpy doesn't affect correctness. */ + kmsan_internal_memmove_metadata(dst, (void *)src, n); + + return result; +} +EXPORT_SYMBOL(__msan_memcpy); + +void *__msan_memset(void *dst, int c, uintptr_t n) +{ + void *result; + + result = __memset(dst, c, n); + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + kmsan_enter_runtime(); + /* + * Clang doesn't pass parameter metadata here, so it is impossible to + * use shadow of @c to set up the shadow for @dst. 
+ */ + kmsan_internal_unpoison_memory(dst, n, /*checked*/ false); + kmsan_leave_runtime(); + + return result; +} +EXPORT_SYMBOL(__msan_memset); + +depot_stack_handle_t __msan_chain_origin(depot_stack_handle_t origin) +{ + depot_stack_handle_t ret = 0; + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return ret; + + ua_flags = user_access_save(); + + /* Creating new origins may allocate memory. */ + kmsan_enter_runtime(); + ret = kmsan_internal_chain_origin(origin); + kmsan_leave_runtime(); + user_access_restore(ua_flags); + return ret; +} +EXPORT_SYMBOL(__msan_chain_origin); + +void __msan_poison_alloca(void *address, uintptr_t size, char *descr) +{ + depot_stack_handle_t handle; + unsigned long entries[4]; + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + entries[0] = KMSAN_ALLOCA_MAGIC_ORIGIN; + entries[1] = (u64)descr; + entries[2] = (u64)__builtin_return_address(0); + /* + * With frame pointers enabled, it is possible to quickly fetch the + * second frame of the caller stack without calling the unwinder. + * Without them, simply do not bother. + */ + if (IS_ENABLED(CONFIG_UNWINDER_FRAME_POINTER)) + entries[3] = (u64)__builtin_return_address(1); + else + entries[3] = 0; + + /* stack_depot_save() may allocate memory. 
*/ + kmsan_enter_runtime(); + handle = stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC); + kmsan_leave_runtime(); + + kmsan_internal_set_shadow_origin(address, size, -1, handle, + /*checked*/ true); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(__msan_poison_alloca); + +void __msan_unpoison_alloca(void *address, uintptr_t size) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + kmsan_enter_runtime(); + kmsan_internal_unpoison_memory(address, size, /*checked*/ true); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(__msan_unpoison_alloca); + +void __msan_warning(u32 origin) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + kmsan_report(origin, /*address*/ 0, /*size*/ 0, + /*off_first*/ 0, /*off_last*/ 0, /*user_addr*/ 0, + REASON_ANY); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(__msan_warning); + +struct kmsan_context_state *__msan_get_context_state(void) +{ + return &kmsan_get_context()->cstate; +} +EXPORT_SYMBOL(__msan_get_context_state); diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h new file mode 100644 index 0000000000000..29c91b6e28799 --- /dev/null +++ b/mm/kmsan/kmsan.h @@ -0,0 +1,197 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Functions used by the KMSAN runtime. + * + * Copyright (C) 2017-2021 Google LLC + * Author: Alexander Potapenko + * + */ + +#ifndef __MM_KMSAN_KMSAN_H +#define __MM_KMSAN_KMSAN_H + +#include +#include +#include +#include +#include +#include +#include +#include + +#define KMSAN_ALLOCA_MAGIC_ORIGIN 0xabcd0100 +#define KMSAN_CHAIN_MAGIC_ORIGIN 0xabcd0200 + +#define KMSAN_POISON_NOCHECK 0x0 +#define KMSAN_POISON_CHECK 0x1 +#define KMSAN_POISON_FREE 0x2 + +#define KMSAN_ORIGIN_SIZE 4 + +#define KMSAN_STACK_DEPTH 64 + +#define KMSAN_META_SHADOW (false) +#define KMSAN_META_ORIGIN (true) + +extern bool kmsan_enabled; +extern int panic_on_kmsan; + +/* + * KMSAN performs a lot of consistency checks that are currently enabled by + * default. 
BUG_ON is normally discouraged in the kernel, unless used for + * debugging, but KMSAN itself is a debugging tool, so it makes little sense to + * recover if something goes wrong. + */ +#define KMSAN_WARN_ON(cond) \ + ({ \ + const bool __cond = WARN_ON(cond); \ + if (unlikely(__cond)) { \ + WRITE_ONCE(kmsan_enabled, false); \ + if (panic_on_kmsan) { \ + /* Can't call panic() here because */ \ + /* of uaccess checks.*/ \ + BUG(); \ + } \ + } \ + __cond; \ + }) + +/* + * A pair of metadata pointers to be returned by the instrumentation functions. + */ +struct shadow_origin_ptr { + void *shadow, *origin; +}; + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size, + bool store); +void *kmsan_get_metadata(void *addr, bool is_origin); +void __init kmsan_init_alloc_meta_for_range(void *start, void *end); + +enum kmsan_bug_reason { + REASON_ANY, + REASON_COPY_TO_USER, + REASON_SUBMIT_URB, +}; + +void kmsan_print_origin(depot_stack_handle_t origin); + +/** + * kmsan_report() - Report a use of uninitialized value. + * @origin: Stack ID of the uninitialized value. + * @address: Address at which the memory access happens. + * @size: Memory access size. + * @off_first: Offset (from @address) of the first byte to be reported. + * @off_last: Offset (from @address) of the last byte to be reported. + * @user_addr: When non-NULL, denotes the userspace address to which the kernel + * is leaking data. + * @reason: Error type from enum kmsan_bug_reason. + * + * kmsan_report() prints an error message for a consequent group of bytes + * sharing the same origin. If an uninitialized value is used in a comparison, + * this function is called once without specifying the addresses. When checking + * a memory range, KMSAN may call kmsan_report() multiple times with the same + * @address, @size, @user_addr and @reason, but different @off_first and + * @off_last corresponding to different @origin values. 
+ */ +void kmsan_report(depot_stack_handle_t origin, void *address, int size, + int off_first, int off_last, const void *user_addr, + enum kmsan_bug_reason reason); + +DECLARE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); + +static __always_inline struct kmsan_ctx *kmsan_get_context(void) +{ + return in_task() ? &current->kmsan_ctx : raw_cpu_ptr(&kmsan_percpu_ctx); +} + +/* + * When a compiler hook is invoked, it may make a call to instrumented code + * and eventually call itself recursively. To avoid that, we protect the + * runtime entry points with kmsan_enter_runtime()/kmsan_leave_runtime() and + * exit the hook if kmsan_in_runtime() is true. + */ + +static __always_inline bool kmsan_in_runtime(void) +{ + if ((hardirq_count() >> HARDIRQ_SHIFT) > 1) + return true; + return kmsan_get_context()->kmsan_in_runtime; +} + +static __always_inline void kmsan_enter_runtime(void) +{ + struct kmsan_ctx *ctx; + + ctx = kmsan_get_context(); + KMSAN_WARN_ON(ctx->kmsan_in_runtime++); +} + +static __always_inline void kmsan_leave_runtime(void) +{ + struct kmsan_ctx *ctx = kmsan_get_context(); + + KMSAN_WARN_ON(--ctx->kmsan_in_runtime); +} + +depot_stack_handle_t kmsan_save_stack(void); +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int extra_bits); + +/* + * Pack and unpack the origin chain depth and UAF flag to/from the extra bits + * provided by the stack depot. + * The UAF flag is stored in the lowest bit, followed by the depth in the upper + * bits. + * set_dsh_extra_bits() is responsible for clamping the value.
+ */ +static __always_inline unsigned int kmsan_extra_bits(unsigned int depth, + bool uaf) +{ + return (depth << 1) | uaf; +} + +static __always_inline bool kmsan_uaf_from_eb(unsigned int extra_bits) +{ + return extra_bits & 1; +} + +static __always_inline unsigned int kmsan_depth_from_eb(unsigned int extra_bits) +{ + return extra_bits >> 1; +} + +/* + * kmsan_internal_ functions are supposed to be very simple and not require the + * kmsan_in_runtime() checks. + */ +void kmsan_internal_memmove_metadata(void *dst, void *src, size_t n); +void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, + unsigned int poison_flags); +void kmsan_internal_unpoison_memory(void *address, size_t size, bool checked); +void kmsan_internal_set_shadow_origin(void *address, size_t size, int b, + u32 origin, bool checked); +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id); + +void kmsan_internal_task_create(struct task_struct *task); + +bool kmsan_metadata_is_contiguous(void *addr, size_t size); +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason); +bool kmsan_internal_is_module_addr(void *vaddr); +bool kmsan_internal_is_vmalloc_addr(void *addr); + +struct page *kmsan_vmalloc_to_page_or_null(void *vaddr); +void kmsan_setup_meta(struct page *page, struct page *shadow, + struct page *origin, int order); + +/* Declared in mm/vmalloc.c */ +void __vunmap_range_noflush(unsigned long start, unsigned long end); +int __vmap_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift); + +/* Declared in mm/internal.h */ +void __free_pages_core(struct page *page, unsigned int order); + +#endif /* __MM_KMSAN_KMSAN_H */ diff --git a/mm/kmsan/report.c b/mm/kmsan/report.c new file mode 100644 index 0000000000000..d539fe1129fb9 --- /dev/null +++ b/mm/kmsan/report.c @@ -0,0 +1,210 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN error reporting 
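The extra-bits encoding used by kmsan_extra_bits()/kmsan_uaf_from_eb()/kmsan_depth_from_eb() above can be exercised in isolation. A minimal userspace sketch follows; the function names mirror the kernel helpers, but no kernel headers are needed and the clamping done by the stack depot is not modeled:

```c
#include <assert.h>
#include <stdbool.h>

/* UAF flag occupies bit 0; the origin-chain depth lives in the upper bits. */
static unsigned int kmsan_extra_bits(unsigned int depth, bool uaf)
{
	return (depth << 1) | uaf;
}

static bool kmsan_uaf_from_eb(unsigned int extra_bits)
{
	return extra_bits & 1;
}

static unsigned int kmsan_depth_from_eb(unsigned int extra_bits)
{
	return extra_bits >> 1;
}
```

A round trip through the pair of helpers recovers both fields, e.g. depth 7 with the UAF flag set packs to 15.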
routines. + * + * Copyright (C) 2019-2021 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include + +#include "kmsan.h" + +static DEFINE_SPINLOCK(kmsan_report_lock); +#define DESCR_SIZE 128 +/* Protected by kmsan_report_lock */ +static char report_local_descr[DESCR_SIZE]; +int panic_on_kmsan __read_mostly; + +#ifdef MODULE_PARAM_PREFIX +#undef MODULE_PARAM_PREFIX +#endif +#define MODULE_PARAM_PREFIX "kmsan." +module_param_named(panic, panic_on_kmsan, int, 0); + +/* + * Skip internal KMSAN frames. + */ +static int get_stack_skipnr(const unsigned long stack_entries[], + int num_entries) +{ + int len, skip; + char buf[64]; + + for (skip = 0; skip < num_entries; ++skip) { + len = scnprintf(buf, sizeof(buf), "%ps", + (void *)stack_entries[skip]); + + /* Never show __msan_* or kmsan_* functions. */ + if ((strnstr(buf, "__msan_", len) == buf) || + (strnstr(buf, "kmsan_", len) == buf)) + continue; + + /* + * No match for runtime functions -- @skip entries to skip to + * get to first frame of interest. + */ + break; + } + + return skip; +} + +/* + * Currently the descriptions of locals generated by Clang look as follows: + * ----local_name@function_name + * We want to print only the name of the local, as other information in that + * description can be confusing. + * The meaningful part of the description is copied to a global buffer to avoid + * allocating memory. 
+ */ +static char *pretty_descr(char *descr) +{ + int i, pos = 0, len = strlen(descr); + + for (i = 0; i < len; i++) { + if (descr[i] == '@') + break; + if (descr[i] == '-') + continue; + report_local_descr[pos] = descr[i]; + if (pos + 1 == DESCR_SIZE) + break; + pos++; + } + report_local_descr[pos] = 0; + return report_local_descr; +} + +void kmsan_print_origin(depot_stack_handle_t origin) +{ + unsigned long *entries = NULL, *chained_entries = NULL; + unsigned int nr_entries, chained_nr_entries, skipnr; + void *pc1 = NULL, *pc2 = NULL; + depot_stack_handle_t head; + unsigned long magic; + char *descr = NULL; + + if (!origin) + return; + + while (true) { + nr_entries = stack_depot_fetch(origin, &entries); + magic = nr_entries ? entries[0] : 0; + if ((nr_entries == 4) && (magic == KMSAN_ALLOCA_MAGIC_ORIGIN)) { + descr = (char *)entries[1]; + pc1 = (void *)entries[2]; + pc2 = (void *)entries[3]; + pr_err("Local variable %s created at:\n", + pretty_descr(descr)); + if (pc1) + pr_err(" %pS\n", pc1); + if (pc2) + pr_err(" %pS\n", pc2); + break; + } + if ((nr_entries == 3) && (magic == KMSAN_CHAIN_MAGIC_ORIGIN)) { + head = entries[1]; + origin = entries[2]; + pr_err("Uninit was stored to memory at:\n"); + chained_nr_entries = + stack_depot_fetch(head, &chained_entries); + kmsan_internal_unpoison_memory( + chained_entries, + chained_nr_entries * sizeof(*chained_entries), + /*checked*/ false); + skipnr = get_stack_skipnr(chained_entries, + chained_nr_entries); + stack_trace_print(chained_entries + skipnr, + chained_nr_entries - skipnr, 0); + pr_err("\n"); + continue; + } + pr_err("Uninit was created at:\n"); + if (nr_entries) { + skipnr = get_stack_skipnr(entries, nr_entries); + stack_trace_print(entries + skipnr, nr_entries - skipnr, + 0); + } else { + pr_err("(stack is not available)\n"); + } + break; + } +} + +void kmsan_report(depot_stack_handle_t origin, void *address, int size, + int off_first, int off_last, const void *user_addr, + enum kmsan_bug_reason reason) +{ + 
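The pretty_descr() parsing above can be checked standalone: it strips the leading '-' padding (and any other dashes) and truncates at '@', so a Clang descriptor like "----buf@do_ioctl" is reduced to "buf". This is a userspace copy of that logic with the same global-buffer approach, not the kernel function itself:

```c
#include <string.h>

#define DESCR_SIZE 128
static char report_local_descr[DESCR_SIZE];

/* Keep only the local's name: drop '-' characters, stop at '@'. */
static char *pretty_descr(const char *descr)
{
	int i, pos = 0, len = strlen(descr);

	for (i = 0; i < len; i++) {
		if (descr[i] == '@')
			break;
		if (descr[i] == '-')
			continue;
		report_local_descr[pos] = descr[i];
		if (pos + 1 == DESCR_SIZE)
			break;
		pos++;
	}
	report_local_descr[pos] = 0;
	return report_local_descr;
}
```

Note that because '-' is skipped everywhere, a hyphen inside the variable name would also be dropped; in practice Clang-generated descriptors only use dashes as leading padding.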
unsigned long stack_entries[KMSAN_STACK_DEPTH]; + int num_stack_entries, skipnr; + char *bug_type = NULL; + unsigned long flags, ua_flags; + bool is_uaf; + + if (!kmsan_enabled) + return; + if (!current->kmsan_ctx.allow_reporting) + return; + if (!origin) + return; + + current->kmsan_ctx.allow_reporting = false; + ua_flags = user_access_save(); + spin_lock_irqsave(&kmsan_report_lock, flags); + pr_err("=====================================================\n"); + is_uaf = kmsan_uaf_from_eb(stack_depot_get_extra_bits(origin)); + switch (reason) { + case REASON_ANY: + bug_type = is_uaf ? "use-after-free" : "uninit-value"; + break; + case REASON_COPY_TO_USER: + bug_type = is_uaf ? "kernel-infoleak-after-free" : + "kernel-infoleak"; + break; + case REASON_SUBMIT_URB: + bug_type = is_uaf ? "kernel-usb-infoleak-after-free" : + "kernel-usb-infoleak"; + break; + } + + num_stack_entries = + stack_trace_save(stack_entries, KMSAN_STACK_DEPTH, 1); + skipnr = get_stack_skipnr(stack_entries, num_stack_entries); + + pr_err("BUG: KMSAN: %s in %pS\n", bug_type, stack_entries[skipnr]); + stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, + 0); + pr_err("\n"); + + kmsan_print_origin(origin); + + if (size) { + pr_err("\n"); + if (off_first == off_last) + pr_err("Byte %d of %d is uninitialized\n", off_first, + size); + else + pr_err("Bytes %d-%d of %d are uninitialized\n", + off_first, off_last, size); + } + if (address) + pr_err("Memory access of size %d starts at %px\n", size, + address); + if (user_addr && reason == REASON_COPY_TO_USER) + pr_err("Data copied to user address %px\n", user_addr); + pr_err("\n"); + dump_stack_print_info(KERN_ERR); + pr_err("=====================================================\n"); + add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); + spin_unlock_irqrestore(&kmsan_report_lock, flags); + if (panic_on_kmsan) + panic("kmsan.panic set ...\n"); + user_access_restore(ua_flags); + current->kmsan_ctx.allow_reporting = true; +} diff --git 
a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c new file mode 100644 index 0000000000000..c71b0ce19ea6d --- /dev/null +++ b/mm/kmsan/shadow.c @@ -0,0 +1,332 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN shadow implementation. + * + * Copyright (C) 2017-2021 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kmsan.h" + +#define shadow_page_for(page) ((page)->kmsan_shadow) + +#define origin_page_for(page) ((page)->kmsan_origin) + +static void *shadow_ptr_for(struct page *page) +{ + return page_address(shadow_page_for(page)); +} + +static void *origin_ptr_for(struct page *page) +{ + return page_address(origin_page_for(page)); +} + +static bool page_has_metadata(struct page *page) +{ + return shadow_page_for(page) && origin_page_for(page); +} + +static void set_no_shadow_origin_page(struct page *page) +{ + shadow_page_for(page) = NULL; + origin_page_for(page) = NULL; +} + +/* + * Dummy load and store pages to be used when the real metadata is unavailable. + * There are separate pages for loads and stores, so that every load returns a + * zero, and every store doesn't affect other loads. + */ +static char dummy_load_page[PAGE_SIZE] __aligned(PAGE_SIZE); +static char dummy_store_page[PAGE_SIZE] __aligned(PAGE_SIZE); + +/* + * Taken from arch/x86/mm/physaddr.h to avoid using an instrumented version. + */ +static int kmsan_phys_addr_valid(unsigned long addr) +{ + if (IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT)) + return !(addr >> boot_cpu_data.x86_phys_bits); + else + return 1; +} + +/* + * Taken from arch/x86/mm/physaddr.c to avoid using an instrumented version. 
+ */ +static bool kmsan_virt_addr_valid(void *addr) +{ + unsigned long x = (unsigned long)addr; + unsigned long y = x - __START_KERNEL_map; + + /* use the carry flag to determine if x was < __START_KERNEL_map */ + if (unlikely(x > y)) { + x = y + phys_base; + + if (y >= KERNEL_IMAGE_SIZE) + return false; + } else { + x = y + (__START_KERNEL_map - PAGE_OFFSET); + + /* carry flag will be set if starting x was >= PAGE_OFFSET */ + if ((x > y) || !kmsan_phys_addr_valid(x)) + return false; + } + + return pfn_valid(x >> PAGE_SHIFT); +} + +static unsigned long vmalloc_meta(void *addr, bool is_origin) +{ + unsigned long addr64 = (unsigned long)addr, off; + + KMSAN_WARN_ON(is_origin && !IS_ALIGNED(addr64, KMSAN_ORIGIN_SIZE)); + if (kmsan_internal_is_vmalloc_addr(addr)) { + off = addr64 - VMALLOC_START; + return off + (is_origin ? KMSAN_VMALLOC_ORIGIN_START : + KMSAN_VMALLOC_SHADOW_START); + } + if (kmsan_internal_is_module_addr(addr)) { + off = addr64 - MODULES_VADDR; + return off + (is_origin ? KMSAN_MODULES_ORIGIN_START : + KMSAN_MODULES_SHADOW_START); + } + return 0; +} + +static struct page *virt_to_page_or_null(void *vaddr) +{ + if (kmsan_virt_addr_valid(vaddr)) + return virt_to_page(vaddr); + else + return NULL; +} + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *address, u64 size, + bool store) +{ + struct shadow_origin_ptr ret; + void *shadow; + + /* + * Even if we redirect this memory access to the dummy page, it will + * go out of bounds. + */ + KMSAN_WARN_ON(size > PAGE_SIZE); + + if (!kmsan_enabled || kmsan_in_runtime()) + goto return_dummy; + + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(address, size)); + shadow = kmsan_get_metadata(address, KMSAN_META_SHADOW); + if (!shadow) + goto return_dummy; + + ret.shadow = shadow; + ret.origin = kmsan_get_metadata(address, KMSAN_META_ORIGIN); + return ret; + +return_dummy: + if (store) { + /* Ignore this store. 
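The vmalloc_meta() mapping above is plain offset arithmetic: shadow and origin for a vmalloc address live at a fixed displacement from dedicated metadata regions. The sketch below illustrates that arithmetic with placeholder region bases; the real KMSAN_VMALLOC_SHADOW_START/KMSAN_VMALLOC_ORIGIN_START values are defined per-architecture and are not these:

```c
#include <stdint.h>

/* Placeholder layout constants for illustration only. */
#define VMALLOC_START              0xffffc90000000000ULL
#define KMSAN_VMALLOC_SHADOW_START 0xffffe90000000000ULL
#define KMSAN_VMALLOC_ORIGIN_START 0xfffff90000000000ULL

/* metadata address = (addr - region start) + metadata region base */
static uint64_t vmalloc_meta(uint64_t addr, int is_origin)
{
	uint64_t off = addr - VMALLOC_START;

	return off + (is_origin ? KMSAN_VMALLOC_ORIGIN_START :
				  KMSAN_VMALLOC_SHADOW_START);
}
```

Because the displacement is constant, no page tables or lookup structures are consulted on this path, which keeps metadata access for vmalloc/module addresses cheap.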
*/ + ret.shadow = dummy_store_page; + ret.origin = dummy_store_page; + } else { + /* This load will return zero. */ + ret.shadow = dummy_load_page; + ret.origin = dummy_load_page; + } + return ret; +} + +/* + * Obtain the shadow or origin pointer for the given address, or NULL if there's + * none. The caller must check the return value for being non-NULL if needed. + * The return value of this function should not depend on whether we're in the + * runtime or not. + */ +void *kmsan_get_metadata(void *address, bool is_origin) +{ + u64 addr = (u64)address, pad, off; + struct page *page; + void *ret; + + if (is_origin && !IS_ALIGNED(addr, KMSAN_ORIGIN_SIZE)) { + pad = addr % KMSAN_ORIGIN_SIZE; + addr -= pad; + } + address = (void *)addr; + if (kmsan_internal_is_vmalloc_addr(address) || + kmsan_internal_is_module_addr(address)) + return (void *)vmalloc_meta(address, is_origin); + + page = virt_to_page_or_null(address); + if (!page) + return NULL; + if (!page_has_metadata(page)) + return NULL; + off = addr % PAGE_SIZE; + + ret = (is_origin ? origin_ptr_for(page) : shadow_ptr_for(page)) + off; + return ret; +} + +/* Allocate metadata for pages allocated at boot time. 
*/ +void __init kmsan_init_alloc_meta_for_range(void *start, void *end) +{ + struct page *shadow_p, *origin_p; + void *shadow, *origin; + struct page *page; + u64 addr, size; + + start = (void *)ALIGN_DOWN((u64)start, PAGE_SIZE); + size = ALIGN((u64)end - (u64)start, PAGE_SIZE); + shadow = memblock_alloc(size, PAGE_SIZE); + origin = memblock_alloc(size, PAGE_SIZE); + for (addr = 0; addr < size; addr += PAGE_SIZE) { + page = virt_to_page_or_null((char *)start + addr); + shadow_p = virt_to_page_or_null((char *)shadow + addr); + set_no_shadow_origin_page(shadow_p); + shadow_page_for(page) = shadow_p; + origin_p = virt_to_page_or_null((char *)origin + addr); + set_no_shadow_origin_page(origin_p); + origin_page_for(page) = origin_p; + } +} + +/* Called from mm/memory.c */ +void kmsan_copy_page_meta(struct page *dst, struct page *src) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + if (!dst || !page_has_metadata(dst)) + return; + if (!src || !page_has_metadata(src)) { + kmsan_internal_unpoison_memory(page_address(dst), PAGE_SIZE, + /*checked*/ false); + return; + } + + kmsan_enter_runtime(); + __memcpy(shadow_ptr_for(dst), shadow_ptr_for(src), PAGE_SIZE); + __memcpy(origin_ptr_for(dst), origin_ptr_for(src), PAGE_SIZE); + kmsan_leave_runtime(); +} + +/* Called from mm/page_alloc.c */ +void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags) +{ + bool initialized = (flags & __GFP_ZERO) || !kmsan_enabled; + struct page *shadow, *origin; + depot_stack_handle_t handle; + int pages = 1 << order; + int i; + + if (!page) + return; + + shadow = shadow_page_for(page); + origin = origin_page_for(page); + + if (initialized) { + __memset(page_address(shadow), 0, PAGE_SIZE * pages); + __memset(page_address(origin), 0, PAGE_SIZE * pages); + return; + } + + /* Zero pages allocated by the runtime should also be initialized. 
*/ + if (kmsan_in_runtime()) + return; + + __memset(page_address(shadow), -1, PAGE_SIZE * pages); + kmsan_enter_runtime(); + handle = kmsan_save_stack_with_flags(flags, /*extra_bits*/ 0); + kmsan_leave_runtime(); + /* + * Addresses are page-aligned, pages are contiguous, so it's ok + * to just fill the origin pages with |handle|. + */ + for (i = 0; i < PAGE_SIZE * pages / sizeof(handle); i++) + ((depot_stack_handle_t *)page_address(origin))[i] = handle; +} + +/* Called from mm/page_alloc.c */ +void kmsan_free_page(struct page *page, unsigned int order) +{ + return; // really nothing to do here. Could rewrite shadow instead. +} + +/* Called from mm/vmalloc.c */ +void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift) +{ + unsigned long shadow_start, origin_start, shadow_end, origin_end; + struct page **s_pages, **o_pages; + int nr, i, mapped; + + if (!kmsan_enabled) + return; + + shadow_start = vmalloc_meta((void *)start, KMSAN_META_SHADOW); + shadow_end = vmalloc_meta((void *)end, KMSAN_META_SHADOW); + if (!shadow_start) + return; + + nr = (end - start) / PAGE_SIZE; + s_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL); + o_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL); + if (!s_pages || !o_pages) + goto ret; + for (i = 0; i < nr; i++) { + s_pages[i] = shadow_page_for(pages[i]); + o_pages[i] = origin_page_for(pages[i]); + } + prot = __pgprot(pgprot_val(prot) | _PAGE_NX); + prot = PAGE_KERNEL; + + origin_start = vmalloc_meta((void *)start, KMSAN_META_ORIGIN); + origin_end = vmalloc_meta((void *)end, KMSAN_META_ORIGIN); + kmsan_enter_runtime(); + mapped = __vmap_pages_range_noflush(shadow_start, shadow_end, prot, + s_pages, page_shift); + KMSAN_WARN_ON(mapped); + mapped = __vmap_pages_range_noflush(origin_start, origin_end, prot, + o_pages, page_shift); + KMSAN_WARN_ON(mapped); + kmsan_leave_runtime(); + flush_tlb_kernel_range(shadow_start, shadow_end); + 
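The origin-fill loop in kmsan_alloc_page() above simply stamps one 4-byte depot handle across every origin slot of the allocated pages. A self-contained userspace sketch of that fill (PAGE_SIZE and depot_stack_handle_t are stand-ins here, not the kernel definitions):

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096
typedef uint32_t depot_stack_handle_t;

static depot_stack_handle_t origin_page[PAGE_SIZE / sizeof(depot_stack_handle_t)];

/* Fill @pages worth of origin slots with a single handle, as done for
 * freshly allocated, uninitialized pages. */
static void fill_origin(depot_stack_handle_t *origin, size_t pages,
			depot_stack_handle_t handle)
{
	for (size_t i = 0; i < PAGE_SIZE * pages / sizeof(handle); i++)
		origin[i] = handle;
}
```

This works because origin slots are exactly sizeof(depot_stack_handle_t) wide and the pages are physically contiguous, so one flat loop covers the whole allocation.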
flush_tlb_kernel_range(origin_start, origin_end); + flush_cache_vmap(shadow_start, shadow_end); + flush_cache_vmap(origin_start, origin_end); + +ret: + kfree(s_pages); + kfree(o_pages); +} + +void kmsan_setup_meta(struct page *page, struct page *shadow, + struct page *origin, int order) +{ + int i; + + for (i = 0; i < (1 << order); i++) { + set_no_shadow_origin_page(&shadow[i]); + set_no_shadow_origin_page(&origin[i]); + shadow_page_for(&page[i]) = &shadow[i]; + origin_page_for(&page[i]) = &origin[i]; + } +} diff --git a/scripts/Makefile.kmsan b/scripts/Makefile.kmsan new file mode 100644 index 0000000000000..9793591f9855c --- /dev/null +++ b/scripts/Makefile.kmsan @@ -0,0 +1 @@ +export CFLAGS_KMSAN := -fsanitize=kernel-memory diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib index d1f865b8c0cba..3a0dbcea51d01 100644 --- a/scripts/Makefile.lib +++ b/scripts/Makefile.lib @@ -162,6 +162,15 @@ _c_flags += $(if $(patsubst n%,, \ endif endif +ifeq ($(CONFIG_KMSAN),y) +_c_flags += $(if $(patsubst n%,, \ + $(KMSAN_SANITIZE_$(basetarget).o)$(KMSAN_SANITIZE)y), \ + $(CFLAGS_KMSAN)) +_c_flags += $(if $(patsubst n%,, \ + $(KMSAN_ENABLE_CHECKS_$(basetarget).o)$(KMSAN_ENABLE_CHECKS)y), \ + , -mllvm -msan-disable-checks=1) +endif + ifeq ($(CONFIG_UBSAN),y) _c_flags += $(if $(patsubst n%,, \ $(UBSAN_SANITIZE_$(basetarget).o)$(UBSAN_SANITIZE)$(CONFIG_UBSAN_SANITIZE_ALL)), \

From patchwork Tue Dec 14 16:20:21 2021
Date: Tue, 14 Dec 2021 17:20:21 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-15-glider@google.com>
Subject: [PATCH 14/43] MAINTAINERS: add entry for KMSAN
From: Alexander Potapenko

Add entry for KMSAN maintainers/reviewers. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ic5836c2bceb6b63f71a60d3327d18af3aa3dab77 --- MAINTAINERS | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 13f9a84a617e3..94add5a5404e4 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10615,6 +10615,18 @@ F: kernel/kmod.c F: lib/test_kmod.c F: tools/testing/selftests/kmod/ +KMSAN +M: Alexander Potapenko +R: Marco Elver +R: Dmitry Vyukov +L: kasan-dev@googlegroups.com +S: Maintained +F: Documentation/dev-tools/kmsan.rst +F: include/linux/kmsan*.h +F: lib/Kconfig.kmsan +F: mm/kmsan/ +F: scripts/Makefile.kmsan + KPROBES M: Naveen N.
Rao M: Anil S Keshavamurthy

From patchwork Tue Dec 14 16:20:22 2021
Date: Tue, 14 Dec 2021 17:20:22 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-16-glider@google.com>
Subject: [PATCH 15/43] kmsan: mm: maintain KMSAN metadata for page operations
From: Alexander Potapenko

Insert KMSAN hooks that make the necessary bookkeeping changes:
- poison page shadow and origins in alloc_pages()/free_page();
- clear page shadow and origins in clear_page(), copy_user_highpage();
- copy page metadata in copy_highpage(), wp_page_copy();
- handle vmap()/vunmap()/iounmap();

Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I6d4f53a0e7eab46fa29f0348f3095d9f2e326850 --- arch/x86/include/asm/page_64.h | 13 +++++++++++++ arch/x86/mm/ioremap.c | 3 +++ include/linux/highmem.h | 3 +++ mm/memory.c | 2 ++ mm/page_alloc.c | 14 ++++++++++++++ mm/vmalloc.c | 20 ++++++++++++++++++-- 6 files changed, 53 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h index 4bde0dc66100c..c10547510f1f4 100644 --- a/arch/x86/include/asm/page_64.h +++ b/arch/x86/include/asm/page_64.h @@ -44,14 +44,27 @@ void clear_page_orig(void *page); void clear_page_rep(void *page); void clear_page_erms(void *page); +/* This is an assembly header, avoid including too much of kmsan.h
+ */
+#ifdef CONFIG_KMSAN
+void kmsan_unpoison_memory(const void *addr, size_t size);
+#endif
+__no_sanitize_memory
 static inline void clear_page(void *page)
 {
+#ifdef CONFIG_KMSAN
+	/* alternative_call_2() changes @page. */
+	void *page_copy = page;
+#endif
 	alternative_call_2(clear_page_orig, clear_page_rep, X86_FEATURE_REP_GOOD,
 			   clear_page_erms, X86_FEATURE_ERMS,
 			   "=D" (page), "0" (page)
 			   : "cc", "memory", "rax", "rcx");
+#ifdef CONFIG_KMSAN
+	/* Clear KMSAN shadow for the pages that have it. */
+	kmsan_unpoison_memory(page_copy, PAGE_SIZE);
+#endif
 }

 void copy_page(void *to, void *from);

diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 026031b3b7829..4d0349ecc7cd7 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -474,6 +475,8 @@ void iounmap(volatile void __iomem *addr)
 		return;
 	}

+	kmsan_iounmap_page_range((unsigned long)addr,
+				 (unsigned long)addr + get_vm_area_size(p));
 	memtype_free(p->phys_addr, p->phys_addr + get_vm_area_size(p));

 	/* Finally remove it */

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 39bb9b47fa9cd..3e1898a44d7e3 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -277,6 +278,7 @@ static inline void copy_user_highpage(struct page *to, struct page *from,
 	vfrom = kmap_local_page(from);
 	vto = kmap_local_page(to);
 	copy_user_page(vto, vfrom, vaddr, to);
+	kmsan_unpoison_memory(page_address(to), PAGE_SIZE);
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 }
@@ -292,6 +294,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 	vfrom = kmap_local_page(from);
 	vto = kmap_local_page(to);
 	copy_page(vto, vfrom);
+	kmsan_copy_page_meta(to, from);
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 }

diff --git a/mm/memory.c b/mm/memory.c
index 8f1de811a1dcb..ea9e48daadb15 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -51,6 +51,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -3003,6 +3004,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf)
 			put_page(old_page);
 			return 0;
 		}
+		kmsan_copy_page_meta(new_page, old_page);
 	}

 	if (mem_cgroup_charge(page_folio(new_page), mm, GFP_KERNEL))

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c5952749ad40b..fa8029b714a81 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -26,6 +26,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -1288,6 +1289,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 	VM_BUG_ON_PAGE(PageTail(page), page);

 	trace_mm_page_free(page, order);
+	kmsan_free_page(page, order);

 	if (unlikely(PageHWPoison(page)) && !order) {
 		/*
@@ -1734,6 +1736,9 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 {
 	if (early_page_uninitialised(pfn))
 		return;
+	if (!kmsan_memblock_free_pages(page, order))
+		/* KMSAN will take care of these pages. */
+		return;
 	__free_pages_core(page, order);
 }
@@ -3663,6 +3668,14 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
 /*
  * Allocate a page from the given zone. Use pcplists for order-0 allocations.
  */
+
+/*
+ * Do not instrument rmqueue() with KMSAN. This function may call
+ * __msan_poison_alloca() through a call to set_pfnblock_flags_mask().
+ * If __msan_poison_alloca() attempts to allocate pages for the stack depot, it
+ * may call rmqueue() again, which will result in a deadlock.
+ */
+__no_sanitize_memory
 static inline
 struct page *rmqueue(struct zone *preferred_zone,
 			struct zone *zone, unsigned int order,
@@ -5389,6 +5402,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid,
 	}

 	trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype);
+	kmsan_alloc_page(page, order, alloc_gfp);

 	return page;
 }

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d2a00ad4e1dd1..333de26b3c56e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -319,6 +319,9 @@ int ioremap_page_range(unsigned long addr, unsigned long end,
 	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
 				 ioremap_max_page_shift);
 	flush_cache_vmap(addr, end);
+	if (!err)
+		kmsan_ioremap_page_range(addr, end, phys_addr, prot,
+					 ioremap_max_page_shift);
 	return err;
 }
@@ -418,7 +421,7 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end,
  *
  * This is an internal function only. Do not use outside mm/.
  */
-void vunmap_range_noflush(unsigned long start, unsigned long end)
+void __vunmap_range_noflush(unsigned long start, unsigned long end)
 {
 	unsigned long next;
 	pgd_t *pgd;
@@ -440,6 +443,12 @@ void vunmap_range_noflush(unsigned long start, unsigned long end)
 		arch_sync_kernel_mappings(start, end);
 }

+void vunmap_range_noflush(unsigned long start, unsigned long end)
+{
+	kmsan_vunmap_range_noflush(start, end);
+	__vunmap_range_noflush(start, end);
+}
+
 /**
  * vunmap_range - unmap kernel virtual addresses
  * @addr: start of the VM area to unmap
@@ -574,7 +583,7 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
  *
  * This is an internal function only. Do not use outside mm/.
  */
-int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
+int __vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 		pgprot_t prot, struct page **pages, unsigned int page_shift)
 {
 	unsigned int i, nr = (end - addr) >> PAGE_SHIFT;
@@ -600,6 +609,13 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
 	return 0;
 }

+int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
+		pgprot_t prot, struct page **pages, unsigned int page_shift)
+{
+	kmsan_vmap_pages_range_noflush(addr, end, prot, pages, page_shift);
+	return __vmap_pages_range_noflush(addr, end, prot, pages, page_shift);
+}
+
 /**
  * vmap_pages_range - map pages to a kernel virtual address
  * @addr: start of the VM area to map

From patchwork Tue Dec 14 16:20:23 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676361
Date: Tue, 14 Dec 2021 17:20:23 +0100
In-Reply-To:
<20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-17-glider@google.com>
Subject: [PATCH 16/43] kmsan: mm: call KMSAN hooks from SLUB code
From: Alexander Potapenko
To: glider@google.com

In order to report uninitialized memory coming from heap allocations,
KMSAN must poison them unless they are created with __GFP_ZERO.
Conveniently, KMSAN hooks are needed in exactly the places where
init_on_alloc/init_on_free initialization is performed.
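The hook placement described above — poisoning fresh heap objects unless __GFP_ZERO zeroes them, and re-poisoning them on free — can be modeled in a few lines of user-space C. This is only a sketch: `toy_slab_alloc()`, `MY_GFP_ZERO` and the one-slot shadow state are invented stand-ins for `kmsan_slab_alloc()`/`kmsan_slab_free()` and real shadow memory, not the kernel API.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MY_GFP_ZERO 0x1u /* hypothetical stand-in for __GFP_ZERO */

/* One-slot shadow state: 1 = poisoned (contents uninitialized), 0 = defined. */
struct shadow_entry {
	void *ptr;
	int poisoned;
};

static struct shadow_entry last_alloc;

/* Model of the slab_post_alloc_hook() path: zero the object when
 * MY_GFP_ZERO is set, and poison it (mark uninitialized) otherwise. */
static void *toy_slab_alloc(size_t size, unsigned int flags)
{
	void *p = malloc(size);

	if (p && (flags & MY_GFP_ZERO))
		memset(p, 0, size);
	last_alloc.ptr = p;
	last_alloc.poisoned = !(flags & MY_GFP_ZERO);
	return p;
}

/* Model of slab_free_hook(): freed memory becomes poisoned again. */
static void toy_slab_free(void *p)
{
	if (p && last_alloc.ptr == p)
		last_alloc.poisoned = 1;
	free(p);
}
```

The point of placing the real hooks next to init_on_alloc/init_on_free is the same as in this toy: allocation-time zeroing and allocation-time poisoning are two answers to the same question ("is this memory defined yet?"), so they naturally live at the same spot.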
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I6954b386c5c5d7f99f48bb6cbcc74b75136ce86e
---
 mm/slab.h |  1 +
 mm/slub.c | 26 +++++++++++++++++++++++---
 2 files changed, 24 insertions(+), 3 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index 56ad7eea3ddfb..6175a74047b47 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -521,6 +521,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 			memset(p[i], 0, s->object_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
+		kmsan_slab_alloc(s, p[i], flags);
 	}

 	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);

diff --git a/mm/slub.c b/mm/slub.c
index abe7db581d686..5a63486e52531 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -346,10 +347,13 @@ static inline void *freelist_dereference(const struct kmem_cache *s,
 					    (unsigned long)ptr_addr);
 }

+/*
+ * See the comment to get_freepointer_safe().
+ */
 static inline void *get_freepointer(struct kmem_cache *s, void *object)
 {
 	object = kasan_reset_tag(object);
-	return freelist_dereference(s, object + s->offset);
+	return kmsan_init(freelist_dereference(s, object + s->offset));
 }

 static void prefetch_freepointer(const struct kmem_cache *s, void *object)
@@ -357,18 +361,28 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 	prefetchw(object + s->offset);
 }

+/*
+ * When running under KMSAN, get_freepointer_safe() may return an uninitialized
+ * pointer value in the case the current thread loses the race for the next
+ * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in
+ * slab_alloc_node() will fail, so the uninitialized value won't be used, but
+ * KMSAN will still check all arguments of cmpxchg because of imperfect
+ * handling of inline assembly.
+ * To work around this problem, use kmsan_init() to force initialize the
+ * return value of get_freepointer_safe().
+ */
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
 	unsigned long freepointer_addr;
 	void *p;

 	if (!debug_pagealloc_enabled_static())
-		return get_freepointer(s, object);
+		return kmsan_init(get_freepointer(s, object));

 	object = kasan_reset_tag(object);
 	freepointer_addr = (unsigned long)object + s->offset;
 	copy_from_kernel_nofault(&p, (void **)freepointer_addr, sizeof(p));
-	return freelist_ptr(s, p, freepointer_addr);
+	return kmsan_init(freelist_ptr(s, p, freepointer_addr));
 }

 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
@@ -1678,6 +1692,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
+	kmsan_kmalloc_large(ptr, size, flags);
 	return ptr;
 }
@@ -1685,12 +1700,14 @@ static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
 	kasan_kfree_large(x);
+	kmsan_kfree_large(x);
 }

 static __always_inline bool slab_free_hook(struct kmem_cache *s,
 						void *x, bool init)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kmsan_slab_free(s, x);

 	debug_check_no_locks_freed(x, s->object_size);
@@ -3729,6 +3746,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
 				slab_want_init_on_alloc(flags, s));
+
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
@@ -5905,6 +5923,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	p += sprintf(p, "%07u", s->size);

 	BUG_ON(p > name + ID_STR_LENGTH - 1);
+	kmsan_unpoison_memory(name, p - name);
 	return name;
 }
@@ -6006,6 +6025,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
 	al->name = name;
 	al->next = alias_list;
 	alias_list = al;
+	kmsan_unpoison_memory(al, sizeof(struct saved_alias));
 	return 0;
 }

From patchwork Tue Dec 14 16:20:24 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676363
Date: Tue, 14 Dec 2021 17:20:24 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-18-glider@google.com>
Subject: [PATCH 17/43] kmsan: handle task creation and exiting
From: Alexander Potapenko
To: glider@google.com
Cc: "Michael S.
Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek, Steven Rostedt,
Thomas Gleixner, Vasily Gorbik, Vegard Nossum, Vlastimil Babka,
linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

Tell KMSAN that a new task is created, so the tool creates a backing
metadata structure for that task.
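The create/exit pairing the patch adds — metadata set up in dup_task_struct() and torn down in do_exit() — can be sketched as a per-task registry in user-space C. All names here (`toy_task_create()`, `ctx_table`, `toy_kmsan_ctx`) are hypothetical stand-ins for `kmsan_task_create()`/`kmsan_task_exit()`; the real implementation keeps its state inside the task itself rather than in a side table.

```c
#include <stddef.h>

#define MAX_TASKS 8

struct toy_task { int pid; };

/* Per-task KMSAN-style metadata, looked up by task pointer. */
struct toy_kmsan_ctx {
	struct toy_task *owner;
	int enabled;
};

static struct toy_kmsan_ctx ctx_table[MAX_TASKS];

/* Model of kmsan_task_create(): set up backing metadata for a new task. */
static int toy_task_create(struct toy_task *t)
{
	for (int i = 0; i < MAX_TASKS; i++) {
		if (!ctx_table[i].owner) {
			ctx_table[i].owner = t;
			ctx_table[i].enabled = 1;
			return 0;
		}
	}
	return -1; /* table full */
}

/* Model of kmsan_task_exit(): release the metadata when the task dies. */
static void toy_task_exit(struct toy_task *t)
{
	for (int i = 0; i < MAX_TASKS; i++) {
		if (ctx_table[i].owner == t) {
			ctx_table[i].owner = NULL;
			ctx_table[i].enabled = 0;
		}
	}
}

/* Look up a task's metadata, as instrumentation would on every check. */
static struct toy_kmsan_ctx *toy_ctx_of(struct toy_task *t)
{
	for (int i = 0; i < MAX_TASKS; i++)
		if (ctx_table[i].owner == t)
			return &ctx_table[i];
	return NULL;
}
```

The design point the sketch illustrates: because every instrumented check needs the current task's metadata, the create hook must run before the task can execute any instrumented code, and the exit hook must run before the task's resources disappear — which is why the calls sit next to kcov_task_init() and kcov_task_exit() in the diff below.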
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I0f41c3a1c7d66f7e14aabcfdfc7c69addb945805
---
 kernel/exit.c | 2 ++
 kernel/fork.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/kernel/exit.c b/kernel/exit.c
index f702a6a63686e..a276f6716dcd5 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -59,6 +59,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -767,6 +768,7 @@ void __noreturn do_exit(long code)
 	profile_task_exit(tsk);
 	kcov_task_exit(tsk);
+	kmsan_task_exit(tsk);

 	coredump_task_exit(tsk);
 	ptrace_event(PTRACE_EVENT_EXIT, code);

diff --git a/kernel/fork.c b/kernel/fork.c
index 3244cc56b697d..5d53ffab2cda7 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -37,6 +37,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -955,6 +956,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	account_kernel_stack(tsk, 1);
 	kcov_task_init(tsk);
+	kmsan_task_create(tsk);
 	kmap_local_fork(tsk);

 #ifdef CONFIG_FAULT_INJECTION

From patchwork Tue Dec 14 16:20:25 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676365
Date: Tue, 14 Dec 2021 17:20:25 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-19-glider@google.com>
Subject: [PATCH 18/43] kmsan: unpoison @tlb in arch_tlb_gather_mmu()
From: Alexander Potapenko
To: glider@google.com
Cc: "Michael S.
Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek, Steven Rostedt,
Thomas Gleixner, Vasily Gorbik, Vegard Nossum, Vlastimil Babka,
linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

This is a hack to reduce stackdepot pressure.

struct mmu_gather contains 7 1-bit fields packed into a 32-bit unsigned
int value. The remaining 25 bits remain uninitialized and are never used,
but KMSAN updates the origin for them in zap_pXX_range() in mm/memory.c,
thus creating very long origin chains. This is technically correct, but
consumes too much memory.

Unpoisoning the whole structure will prevent creating such chains.
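The padding problem described above is easy to reproduce in user space: seven 1-bit bitfields occupy one 32-bit unsigned int, and the 25 remaining bits are never touched by field assignments. The sketch below uses field names modeled on struct mmu_gather; the layout assumptions (sizeof equals one unsigned int, bitfields allocated from the least-significant bit) hold on common ABIs such as x86-64 with GCC/Clang but are not guaranteed by the C standard.

```c
#include <string.h>

/* Seven 1-bit flags packed into one unsigned int, as in struct mmu_gather.
 * Field assignments only ever write the named bits; the other 25 bits of
 * the word are padding the compiler never initializes. */
struct toy_gather_flags {
	unsigned int fullmm : 1;
	unsigned int need_flush_all : 1;
	unsigned int freed_tables : 1;
	unsigned int cleared_ptes : 1;
	unsigned int cleared_pmds : 1;
	unsigned int cleared_puds : 1;
	unsigned int cleared_p4ds : 1;
};

/* Read back the underlying 32-bit word, padding bits included. */
static unsigned int flags_word(const struct toy_gather_flags *f)
{
	unsigned int w;

	memcpy(&w, f, sizeof(w));
	return w;
}

/* Assign every named field, the way __tlb_gather_mmu() initializes flags. */
static void set_all_flags(struct toy_gather_flags *f)
{
	f->fullmm = 1;
	f->need_flush_all = 1;
	f->freed_tables = 1;
	f->cleared_ptes = 1;
	f->cleared_pmds = 1;
	f->cleared_puds = 1;
	f->cleared_p4ds = 1;
}
```

Filling the struct with 0xff first (standing in for uninitialized stack garbage) and then assigning all seven fields still leaves the upper 25 bits at their old values; only clearing the whole object up front — the role kmsan_unpoison_memory(tlb, sizeof(*tlb)) plays for the shadow — makes every bit of the word defined.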
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I76abee411b8323acfdbc29bc3a60dca8cff2de77
---
 mm/mmu_gather.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 1b9837419bf9c..72e4c4ca01d27 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -1,6 +1,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -252,6 +253,15 @@ void tlb_flush_mmu(struct mmu_gather *tlb)
 static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
 			     bool fullmm)
 {
+	/*
+	 * struct mmu_gather contains 7 1-bit fields packed into a 32-bit
+	 * unsigned int value. The remaining 25 bits remain uninitialized
+	 * and are never used, but KMSAN updates the origin for them in
+	 * zap_pXX_range() in mm/memory.c, thus creating very long origin
+	 * chains. This is technically correct, but consumes too much memory.
+	 * Unpoisoning the whole structure will prevent creating such chains.
+	 */
+	kmsan_unpoison_memory(tlb, sizeof(*tlb));
 	tlb->mm = mm;
 	tlb->fullmm = fullmm;

From patchwork Tue Dec 14 16:20:26 2021
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12676367
Date: Tue, 14 Dec 2021 17:20:26 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-20-glider@google.com>
Subject: [PATCH 19/43] kmsan: init: call KMSAN initialization routines
From: Alexander Potapenko
To: glider@google.com
Cc: "Michael S.
Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek, Steven Rostedt,
Thomas Gleixner, Vasily Gorbik, Vegard Nossum, Vlastimil Babka,
linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

kmsan_initialize_shadow() creates metadata pages for mappings created
at boot time. kmsan_initialize() initializes the bookkeeping for
init_task and enables KMSAN.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I7bc53706141275914326df2345881ffe0cdd16bd
---
 init/main.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/init/main.c b/init/main.c
index bb984ed79de0e..2fc5025db0810 100644
--- a/init/main.c
+++ b/init/main.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -834,6 +835,7 @@ static void __init mm_init(void)
 	init_mem_debugging_and_hardening();
 	kfence_alloc_pool();
 	report_meminit();
+	kmsan_init_shadow();
 	stack_depot_init();
 	mem_init();
 	mem_init_print_info();
@@ -848,6 +850,7 @@ static void __init mm_init(void)
 	init_espfix_bsp();
 	/* Should be run after espfix64 is set up.
 */
 	pti_init();
+	kmsan_init_runtime();
 }

 #ifdef CONFIG_HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET

From patchwork Tue Dec 14 16:20:27 2021
Date: Tue, 14 Dec 2021 17:20:27 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-21-glider@google.com>
Subject: [PATCH 20/43] instrumented.h: add KMSAN support
From: Alexander Potapenko <glider@google.com>

To avoid false positives, KMSAN needs to unpoison data copied from
userspace. To detect infoleaks, check the memory buffer passed to
copy_to_user().

Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I43e93b9c02709e6be8d222342f1b044ac8bdbaaf
---
 include/linux/instrumented.h | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
index ee8f7d17d34f5..c73c1b19e9227 100644
--- a/include/linux/instrumented.h
+++ b/include/linux/instrumented.h
@@ -2,7 +2,7 @@
 /*
  * This header provides generic wrappers for memory access instrumentation that
- * the compiler cannot emit for: KASAN, KCSAN.
+ * the compiler cannot emit for: KASAN, KCSAN, KMSAN.
  */
 #ifndef _LINUX_INSTRUMENTED_H
 #define _LINUX_INSTRUMENTED_H
@@ -10,6 +10,7 @@
 #include
 #include
 #include
+#include
 #include

 /**
@@ -117,6 +118,7 @@
 instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 	kasan_check_read(from, n);
 	kcsan_check_read(from, n);
+	kmsan_copy_to_user(to, from, n, 0);
 }

 /**
@@ -151,6 +153,7 @@ static __always_inline void
 instrument_copy_from_user_after(const void *to, const void __user *from,
 				unsigned long n, unsigned long left)
 {
+	kmsan_unpoison_memory(to, n - left);
 }

 #endif /* _LINUX_INSTRUMENTED_H */

From patchwork Tue Dec 14 16:20:28 2021
Date: Tue, 14 Dec 2021 17:20:28 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-22-glider@google.com>
Subject: [PATCH 21/43] kmsan: mark noinstr as __no_sanitize_memory
From: Alexander Potapenko <glider@google.com>

noinstr functions should never be instrumented, so make KMSAN skip them
by applying the __no_sanitize_memory attribute.
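As background for readers unfamiliar with sanitizer opt-out attributes, here is a hedged userspace sketch (not the kernel's actual definition) of how such an attribute can be composed so that it degrades to a no-op on compilers without the feature. `demo_no_sanitize_memory` and `demo_entry` are invented names; `__has_feature(memory_sanitizer)` is Clang's feature test for userspace MemorySanitizer:

```c
#include <assert.h>

/*
 * Sketch of a "never instrument this function" attribute. On Clang with
 * -fsanitize=memory, __has_feature(memory_sanitizer) is true and the
 * attribute tells MSan to skip the function; on other compilers the
 * macro expands to nothing, so the code still builds everywhere.
 */
#if defined(__has_feature)
# if __has_feature(memory_sanitizer)
#  define demo_no_sanitize_memory __attribute__((no_sanitize("memory")))
# endif
#endif
#ifndef demo_no_sanitize_memory
# define demo_no_sanitize_memory /* non-MSan builds: no-op */
#endif

/* Stand-in for low-level entry code that must not call sanitizer hooks. */
static demo_no_sanitize_memory int demo_entry(int x)
{
	return x + 1;
}
```

The kernel's noinstr macro stacks several such per-sanitizer opt-outs; this patch extends that stack with the KMSAN one.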
Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I3c9abe860b97b49bc0c8026918b17a50448dec0d
---
 include/linux/compiler_types.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 1d32f4c03c9ef..37b82564e93e5 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -210,7 +210,8 @@ struct ftrace_likely_data {
 /* Section for code which can't be instrumented at all */
 #define noinstr								\
 	noinline notrace __attribute((__section__(".noinstr.text")))	\
-	__no_kcsan __no_sanitize_address __no_profile __no_sanitize_coverage
+	__no_kcsan __no_sanitize_address __no_profile __no_sanitize_coverage \
+	__no_sanitize_memory

 #endif /* __KERNEL__ */

From patchwork Tue Dec 14 16:20:29 2021
Date: Tue, 14 Dec 2021 17:20:29 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-23-glider@google.com>
Subject: [PATCH 22/43] kmsan: initialize the output of READ_ONCE_NOCHECK()
From: Alexander Potapenko <glider@google.com>

READ_ONCE_NOCHECK() is already used by KASAN to ignore memory accesses
from e.g. stack unwinders. Define READ_ONCE_NOCHECK() for KMSAN so that
it returns initialized values. This helps defeat false positives from
leftover stack contents.
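As a rough illustration of the pattern this patch builds on, here is a hedged userspace sketch of a READ_ONCE_NOCHECK()-style macro. `DEMO_READ_ONCE_NOCHECK` and `demo_read_once_word` are invented names; the kernel's real `kmsan_init()` wrapper, which forces the loaded value to be treated as initialized, is represented only by a comment:

```c
#include <assert.h>

/*
 * Load one word through a volatile pointer, mimicking the shape of
 * __read_once_word_nocheck(). In the kernel this helper is additionally
 * excluded from sanitizer instrumentation.
 */
static unsigned long demo_read_once_word(const void *addr)
{
	return *(const volatile unsigned long *)addr;
}

/*
 * Word-sized loads only, enforced at compile time the way
 * compiletime_assert() does it. The kernel's KMSAN variant wraps the
 * result in kmsan_init() so the sanitizer considers it initialized.
 */
#define DEMO_READ_ONCE_NOCHECK(x)					\
({									\
	_Static_assert(sizeof(x) == sizeof(unsigned long),		\
		       "Unsupported access size");			\
	(__typeof__(x))demo_read_once_word(&(x));			\
})
```

The statement-expression form (a GNU C extension the kernel relies on) lets the macro behave like a plain rvalue at the call site.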
Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I07499eb3e8e59c0ad2fd486cedc932d958b37afd
---
 include/asm-generic/rwonce.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h
index 8d0a6280e9824..7cf993af8e1ea 100644
--- a/include/asm-generic/rwonce.h
+++ b/include/asm-generic/rwonce.h
@@ -25,6 +25,7 @@
 #include
 #include
 #include
+#include

 /*
  * Yes, this permits 64-bit accesses on 32-bit architectures. These will
@@ -69,14 +70,14 @@ unsigned long __read_once_word_nocheck(const void *addr)

 /*
  * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a
- * word from memory atomically but without telling KASAN/KCSAN. This is
+ * word from memory atomically but without telling KASAN/KCSAN/KMSAN. This is
  * usually used by unwinding code when walking the stack of a running process.
  */
 #define READ_ONCE_NOCHECK(x)						\
 ({									\
 	compiletime_assert(sizeof(x) == sizeof(unsigned long),		\
 		"Unsupported access size for READ_ONCE_NOCHECK().");	\
-	(typeof(x))__read_once_word_nocheck(&(x));			\
+	kmsan_init((typeof(x))__read_once_word_nocheck(&(x)));		\
 })

 static __no_kasan_or_inline

From patchwork Tue Dec 14 16:20:30 2021
Date: Tue, 14 Dec 2021 17:20:30 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-24-glider@google.com>
Subject: [PATCH 23/43] kmsan: make READ_ONCE_TASK_STACK() return initialized values
From: Alexander Potapenko <glider@google.com>

To avoid false positives, assume that reading from the task stack
always produces initialized values.

Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I9e2350bf3e88688dd83537e12a23456480141997
---
 arch/x86/include/asm/unwind.h | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h
index 2a1f8734416dc..51173b19ac4d5 100644
--- a/arch/x86/include/asm/unwind.h
+++ b/arch/x86/include/asm/unwind.h
@@ -129,18 +129,19 @@ unsigned long unwind_recover_ret_addr(struct unwind_state *state,
 }

 /*
- * This disables KASAN checking when reading a value from another task's stack,
- * since the other task could be running on another CPU and could have poisoned
- * the stack in the meantime.
+ * This disables KASAN/KMSAN checking when reading a value from another task's
+ * stack, since the other task could be running on another CPU and could have
+ * poisoned the stack in the meantime. Frame pointers are uninitialized by
+ * default, so for KMSAN we mark the return value initialized unconditionally.
  */
-#define READ_ONCE_TASK_STACK(task, x)				\
-({								\
-	unsigned long val;					\
-	if (task == current)					\
-		val = READ_ONCE(x);				\
-	else							\
-		val = READ_ONCE_NOCHECK(x);			\
-	val;							\
+#define READ_ONCE_TASK_STACK(task, x)				\
+({								\
+	unsigned long val;					\
+	if (task == current && !IS_ENABLED(CONFIG_KMSAN))	\
+		val = READ_ONCE(x);				\
+	else							\
+		val = READ_ONCE_NOCHECK(x);			\
+	val;							\
 })

 static inline bool task_on_another_cpu(struct task_struct *task)

From patchwork Tue Dec 14 16:20:31 2021
Date: Tue, 14 Dec 2021 17:20:31 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-25-glider@google.com>
Subject: [PATCH 24/43] kmsan: disable KMSAN instrumentation for certain kernel parts
From: Alexander Potapenko <glider@google.com>

Instrumenting some files with KMSAN will result in the kernel being
unable to link or boot, or crashing at runtime, for various reasons
(e.g. infinite recursion caused by instrumentation hooks calling
instrumented code again).
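The recursion problem this patch sidesteps can be sketched in a few lines of plain C (all names here are invented for illustration). If the sanitizer runtime itself were compiled with instrumentation, every hook call would re-enter the runtime, so KMSAN-like runtimes either exclude such files from instrumentation, as this patch does, or carry a reentrancy guard:

```c
#include <assert.h>

static int in_runtime;  /* reentrancy guard, similar in spirit to kmsan_in_runtime() */
static int checks_done; /* counts completed hook invocations */

static void demo_runtime_helper(void);

/* Stand-in for the hook the compiler inserts before a memory access. */
static void demo_memory_access_hook(void)
{
	if (in_runtime)
		return;            /* without this, the call chain below never terminates */
	in_runtime = 1;
	demo_runtime_helper();     /* runtime work that is itself instrumented */
	in_runtime = 0;
	checks_done++;
}

/* If this helper were instrumented, the compiler would call the hook again. */
static void demo_runtime_helper(void)
{
	demo_memory_access_hook(); /* simulated re-entry; the guard cuts it short */
}
```

Opting whole files out via KMSAN_SANITIZE := n removes the re-entry at compile time, which is cheaper and safer than guarding every hook at runtime.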
Completely omit KMSAN instrumentation in the following places: - arch/x86/boot and arch/x86/realmode/rm, as KMSAN doesn't work for i386; - arch/x86/entry/vdso, which isn't linked with KMSAN runtime; - three files in arch/x86/kernel - boot problems; - arch/x86/mm/cpu_entry_area.c - recursion; - EFI stub - build failures; - kcov, stackdepot, lockdep - recursion. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Id5e5c4a9f9d53c24a35ebb633b814c414628d81b --- arch/x86/boot/Makefile | 1 + arch/x86/boot/compressed/Makefile | 1 + arch/x86/entry/vdso/Makefile | 3 +++ arch/x86/kernel/Makefile | 2 ++ arch/x86/kernel/cpu/Makefile | 1 + arch/x86/mm/Makefile | 2 ++ arch/x86/realmode/rm/Makefile | 1 + drivers/firmware/efi/libstub/Makefile | 1 + kernel/Makefile | 1 + kernel/locking/Makefile | 3 ++- lib/Makefile | 1 + 11 files changed, 16 insertions(+), 1 deletion(-) diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile index b5aecb524a8aa..d5623232b763f 100644 --- a/arch/x86/boot/Makefile +++ b/arch/x86/boot/Makefile @@ -12,6 +12,7 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Kernel does not boot with kcov instrumentation here. diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 431bf7f846c3c..c4a284b738e71 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -20,6 +20,7 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. 
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile index a2dddcc189f69..f2a175d872b07 100644 --- a/arch/x86/entry/vdso/Makefile +++ b/arch/x86/entry/vdso/Makefile @@ -11,6 +11,9 @@ include $(srctree)/lib/vdso/Makefile # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n +KMSAN_SANITIZE_vclock_gettime.o := n +KMSAN_SANITIZE_vgetcpu.o := n + UBSAN_SANITIZE := n KCSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index 2ff3e600f4269..0b9fc3ecce2de 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -35,6 +35,8 @@ KASAN_SANITIZE_cc_platform.o := n # With some compiler versions the generated code results in boot hangs, caused # by several compilation units. To be safe, disable all instrumentation. KCSAN_SANITIZE := n +KMSAN_SANITIZE_head$(BITS).o := n +KMSAN_SANITIZE_nmi.o := n OBJECT_FILES_NON_STANDARD_test_nx.o := y diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index 9661e3e802be5..f10a921ee7565 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -12,6 +12,7 @@ endif # If these files are instrumented, boot hangs during the first second. KCOV_INSTRUMENT_common.o := n KCOV_INSTRUMENT_perf_event.o := n +KMSAN_SANITIZE_common.o := n # As above, instrumenting secondary CPU boot code causes boot hangs. KCSAN_SANITIZE_common.o := n diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index 5864219221ca8..747d4630d52ce 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -10,6 +10,8 @@ KASAN_SANITIZE_mem_encrypt_identity.o := n # Disable KCSAN entirely, because otherwise we get warnings that some functions # reference __initdata sections. KCSAN_SANITIZE := n +# Avoid recursion by not calling KMSAN hooks for CEA code. 
+KMSAN_SANITIZE_cpu_entry_area.o := n

 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_mem_encrypt.o	= -pg

diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index 83f1b6a56449f..f614009d3e4e2 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -10,6 +10,7 @@
 # Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE			:= n
 KCSAN_SANITIZE			:= n
+KMSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

 # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index d0537573501e9..81432d0c904b1 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -46,6 +46,7 @@ GCOV_PROFILE			:= n
 # Sanitizer runtimes are unavailable and cannot be linked here.
 KASAN_SANITIZE			:= n
 KCSAN_SANITIZE			:= n
+KMSAN_SANITIZE			:= n
 UBSAN_SANITIZE			:= n
 OBJECT_FILES_NON_STANDARD	:= y

diff --git a/kernel/Makefile b/kernel/Makefile
index 186c49582f45b..e5dd600e63d8a 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -39,6 +39,7 @@ KCOV_INSTRUMENT_kcov.o := n
 KASAN_SANITIZE_kcov.o := n
 KCSAN_SANITIZE_kcov.o := n
 UBSAN_SANITIZE_kcov.o := n
+KMSAN_SANITIZE_kcov.o := n
 CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector

 # Don't instrument error handlers

diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index d51cabf28f382..ea925731fa40f 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -5,8 +5,9 @@ KCOV_INSTRUMENT		:= n

 obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o

-# Avoid recursion lockdep -> KCSAN -> ... -> lockdep.
+# Avoid recursion lockdep -> sanitizer -> ... -> lockdep.
 KCSAN_SANITIZE_lockdep.o := n
+KMSAN_SANITIZE_lockdep.o := n

 ifdef CONFIG_FUNCTION_TRACER
 CFLAGS_REMOVE_lockdep.o = $(CC_FLAGS_FTRACE)

diff --git a/lib/Makefile b/lib/Makefile
index 364c23f155781..8e5ae9d5966de 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -268,6 +268,7 @@ obj-$(CONFIG_IRQ_POLL)	+= irq_poll.o
 CFLAGS_stackdepot.o += -fno-builtin
 obj-$(CONFIG_STACKDEPOT) += stackdepot.o
 KASAN_SANITIZE_stackdepot.o := n
+KMSAN_SANITIZE_stackdepot.o := n
 KCOV_INSTRUMENT_stackdepot.o := n

 libfdt_files = fdt.o fdt_ro.o fdt_wip.o fdt_rw.o fdt_sw.o fdt_strerror.o \

From patchwork Tue Dec 14 16:20:32 2021
Date: Tue, 14 Dec 2021 17:20:32 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-26-glider@google.com>
Subject: [PATCH 25/43] kmsan: skip shadow checks in files doing context switches
From: Alexander Potapenko

When instrumenting functions, KMSAN obtains the per-task state (mostly
pointers to metadata for function arguments and return values) once per
function at its beginning. If a function performs a context switch,
instrumented code won't notice that, and will still refer to the old
state, possibly corrupting it or using stale data. This may result in
false positive reports.
To deal with that, we need to apply __no_kmsan_checks to the functions
performing context switching - this will result in skipping all KMSAN
shadow checks and marking newly created values as initialized,
preventing all false positive reports in those functions. False
negatives are still possible, but we expect them to be rare and
non-persistent.

To improve maintainability, we choose to apply __no_kmsan_checks not
just to a handful of functions, but to the whole files that may perform
context switching - this is done via KMSAN_ENABLE_CHECKS := n. This
decision can be reconsidered in the future, when KMSAN won't need so
much attention.

Suggested-by: Marco Elver
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Id40563d36792b4482534c9a0134965d77a5581fa
---
 arch/x86/kernel/Makefile | 4 ++++
 kernel/sched/Makefile    | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 0b9fc3ecce2de..308d4d0323263 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -38,6 +38,10 @@ KCSAN_SANITIZE := n
 KMSAN_SANITIZE_head$(BITS).o := n
 KMSAN_SANITIZE_nmi.o := n

+# Some functions in process_64.c perform context switching.
+# Apply __no_kmsan_checks to the whole file to avoid false positives.
+KMSAN_ENABLE_CHECKS_process_64.o := n
+
 OBJECT_FILES_NON_STANDARD_test_nx.o := y

 ifdef CONFIG_FRAME_POINTER

diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index c7421f2d05e15..d9bf8223a064a 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -17,6 +17,10 @@ KCOV_INSTRUMENT := n
 # eventually.
 KCSAN_SANITIZE := n

+# Some functions in core.c perform context switching. Apply __no_kmsan_checks
+# to the whole file to avoid false positives.
+KMSAN_ENABLE_CHECKS_core.o := n
+
 ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
 # According to Alan Modra, the -fno-omit-frame-pointer is
 # needed for x86 only.
 # Why this used to be enabled for all architectures is beyond

From patchwork Tue Dec 14 16:20:33 2021
Date: Tue, 14 Dec 2021 17:20:33 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-27-glider@google.com>
Subject: [PATCH 26/43] kmsan: virtio: check/unpoison scatterlist in vring_map_one_sg()
From: Alexander Potapenko
If vring doesn't use the DMA API, KMSAN is unable to tell whether the
memory is initialized by hardware. Explicitly call kmsan_handle_dma()
from vring_map_one_sg() in this case to prevent false positives.

Signed-off-by: Alexander Potapenko
Acked-by: Michael S. Tsirkin
---
Link: https://linux-review.googlesource.com/id/I211533ecb86a66624e151551f83ddd749536b3af
---
 drivers/virtio/virtio_ring.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 6d2614e34470f..bf4d5b331e99d 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -331,8 +332,15 @@ static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq,
				   struct scatterlist *sg,
				   enum dma_data_direction direction)
 {
-	if (!vq->use_dma_api)
+	if (!vq->use_dma_api) {
+		/*
+		 * If DMA is not used, KMSAN doesn't know that the scatterlist
+		 * is initialized by the hardware. Explicitly check/unpoison it
+		 * depending on the direction.
+		 */
+		kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction);
 		return (dma_addr_t)sg_phys(sg);
+	}

	/*
	 * We can't use dma_map_sg, because we don't use scatterlists in

From patchwork Tue Dec 14 16:20:34 2021
Date: Tue, 14 Dec 2021 17:20:34 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-28-glider@google.com>
Subject: [PATCH 27/43] x86: kmsan: add iomem support
From: Alexander Potapenko

Functions from lib/iomap.c and arch/x86/lib/iomem.c interact with
hardware, so KMSAN must ensure that:
 - every read function returns an initialized value;
 - every write function checks values before sending them to hardware.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I45527599f09090aca046dfe1a26df453adab100d
---
 arch/x86/lib/iomem.c |  5 +++++
 lib/iomap.c          | 40 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 45 insertions(+)

diff --git a/arch/x86/lib/iomem.c b/arch/x86/lib/iomem.c
index df50451d94ef7..2307770f3f4c8 100644
--- a/arch/x86/lib/iomem.c
+++ b/arch/x86/lib/iomem.c
@@ -1,6 +1,7 @@
 #include
 #include
 #include
+#include

 #define movs(type,to,from) \
	asm volatile("movs" type:"=&D" (to), "=&S" (from):"0" (to), "1" (from):"memory")
@@ -37,6 +38,8 @@ void memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
		n-=2;
	}
	rep_movs(to, (const void *)from, n);
+	/* KMSAN must treat values read from devices as initialized. */
+	kmsan_unpoison_memory(to, n);
 }
 EXPORT_SYMBOL(memcpy_fromio);

@@ -45,6 +48,8 @@ void memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
	if (unlikely(!n))
		return;

+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(from, n);
	/* Align any unaligned destination IO */
	if (unlikely(1 & (unsigned long)to)) {
		movs("b", to, from);

diff --git a/lib/iomap.c b/lib/iomap.c
index fbaa3e8f19d6c..bdda1a42771b2 100644
--- a/lib/iomap.c
+++ b/lib/iomap.c
@@ -6,6 +6,7 @@
 */
 #include
 #include
+#include

 #include
@@ -70,26 +71,31 @@ static void bad_io_access(unsigned long port, const char *access)
 #define mmio_read64be(addr) swab64(readq(addr))
 #endif

+__no_sanitize_memory
 unsigned int ioread8(const void __iomem *addr)
 {
	IO_COND(addr, return inb(port), return readb(addr));
	return 0xff;
 }
+__no_sanitize_memory
 unsigned int ioread16(const void __iomem *addr)
 {
	IO_COND(addr, return inw(port), return readw(addr));
	return 0xffff;
 }
+__no_sanitize_memory
 unsigned int ioread16be(const void __iomem *addr)
 {
	IO_COND(addr, return pio_read16be(port), return mmio_read16be(addr));
	return 0xffff;
 }
+__no_sanitize_memory
 unsigned int ioread32(const void __iomem *addr)
 {
	IO_COND(addr, return inl(port), return readl(addr));
	return 0xffffffff;
 }
+__no_sanitize_memory
 unsigned int ioread32be(const void __iomem *addr)
 {
	IO_COND(addr, return pio_read32be(port), return mmio_read32be(addr));
@@ -142,18 +148,21 @@ static u64 pio_read64be_hi_lo(unsigned long port)
	return lo | (hi << 32);
 }

+__no_sanitize_memory
 u64 ioread64_lo_hi(const void __iomem *addr)
 {
	IO_COND(addr, return pio_read64_lo_hi(port), return readq(addr));
	return 0xffffffffffffffffULL;
 }

+__no_sanitize_memory
 u64 ioread64_hi_lo(const void __iomem *addr)
 {
	IO_COND(addr, return pio_read64_hi_lo(port), return readq(addr));
	return 0xffffffffffffffffULL;
 }

+__no_sanitize_memory
 u64 ioread64be_lo_hi(const void __iomem *addr)
 {
	IO_COND(addr, return pio_read64be_lo_hi(port),
@@ -161,6 +170,7 @@ u64 ioread64be_lo_hi(const void __iomem *addr)
	return 0xffffffffffffffffULL;
 }

+__no_sanitize_memory
 u64 ioread64be_hi_lo(const void __iomem *addr)
 {
	IO_COND(addr, return pio_read64be_hi_lo(port),
@@ -188,22 +198,32 @@ EXPORT_SYMBOL(ioread64be_hi_lo);

 void iowrite8(u8 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
	IO_COND(addr, outb(val,port), writeb(val, addr));
 }
 void iowrite16(u16 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
	IO_COND(addr, outw(val,port), writew(val, addr));
 }
 void iowrite16be(u16 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
	IO_COND(addr, pio_write16be(val,port), mmio_write16be(val, addr));
 }
 void iowrite32(u32 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
	IO_COND(addr, outl(val,port), writel(val, addr));
 }
 void iowrite32be(u32 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
	IO_COND(addr, pio_write32be(val,port), mmio_write32be(val, addr));
 }
 EXPORT_SYMBOL(iowrite8);
@@ -239,24 +259,32 @@ static void pio_write64be_hi_lo(u64 val, unsigned long port)

 void iowrite64_lo_hi(u64 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
	IO_COND(addr, pio_write64_lo_hi(val, port), writeq(val, addr));
 }
 void iowrite64_hi_lo(u64 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
	IO_COND(addr, pio_write64_hi_lo(val, port), writeq(val, addr));
 }
 void iowrite64be_lo_hi(u64 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
	IO_COND(addr, pio_write64be_lo_hi(val, port), mmio_write64be(val, addr));
 }
 void iowrite64be_hi_lo(u64 val, void __iomem *addr)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(&val, sizeof(val));
	IO_COND(addr, pio_write64be_hi_lo(val, port), mmio_write64be(val, addr));
 }
@@ -328,14 +356,20 @@ static inline void mmio_outsl(void __iomem *addr, const u32 *src, int count)

 void ioread8_rep(const void __iomem *addr, void *dst, unsigned long count)
 {
	IO_COND(addr, insb(port,dst,count), mmio_insb(addr, dst, count));
+	/* KMSAN must treat values read from devices as initialized. */
+	kmsan_unpoison_memory(dst, count);
 }
 void ioread16_rep(const void __iomem *addr, void *dst, unsigned long count)
 {
	IO_COND(addr, insw(port,dst,count), mmio_insw(addr, dst, count));
+	/* KMSAN must treat values read from devices as initialized. */
+	kmsan_unpoison_memory(dst, count * 2);
 }
 void ioread32_rep(const void __iomem *addr, void *dst, unsigned long count)
 {
	IO_COND(addr, insl(port,dst,count), mmio_insl(addr, dst, count));
+	/* KMSAN must treat values read from devices as initialized. */
+	kmsan_unpoison_memory(dst, count * 4);
 }
 EXPORT_SYMBOL(ioread8_rep);
 EXPORT_SYMBOL(ioread16_rep);
@@ -343,14 +377,20 @@ EXPORT_SYMBOL(ioread32_rep);

 void iowrite8_rep(void __iomem *addr, const void *src, unsigned long count)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(src, count);
	IO_COND(addr, outsb(port, src, count), mmio_outsb(addr, src, count));
 }
 void iowrite16_rep(void __iomem *addr, const void *src, unsigned long count)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(src, count * 2);
	IO_COND(addr, outsw(port, src, count), mmio_outsw(addr, src, count));
 }
 void iowrite32_rep(void __iomem *addr, const void *src, unsigned long count)
 {
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(src, count * 4);
	IO_COND(addr, outsl(port, src,count), mmio_outsl(addr, src, count));
 }
 EXPORT_SYMBOL(iowrite8_rep);

From patchwork Tue Dec 14 16:20:35 2021
Date: Tue, 14 Dec 2021 17:20:35 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-29-glider@google.com>
Subject: [PATCH 28/43] kmsan: dma: unpoison DMA mappings
From: Alexander Potapenko
KMSAN doesn't know about DMA memory writes performed by devices. We
unpoison such memory when it's mapped to avoid false positive reports.
Signed-off-by: Alexander Potapenko
Link: https://linux-review.googlesource.com/id/Ia162dc4c5a92e74d4686c1be32a4dfeffc5c32cd
---
 kernel/dma/mapping.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 9478eccd1c8e6..0560080813761 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -156,6 +156,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
+	kmsan_handle_dma(page, offset, size, dir);
 	debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);

 	return addr;
@@ -194,11 +195,13 @@ static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 	else
 		ents = ops->map_sg(dev, sg, nents, dir, attrs);

-	if (ents > 0)
+	if (ents > 0) {
+		kmsan_handle_dma_sg(sg, nents, dir);
 		debug_dma_map_sg(dev, sg, nents, ents, dir, attrs);
-	else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM &&
-			      ents != -EIO))
+	} else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM &&
+				ents != -EIO)) {
 		return -EIO;
+	}

 	return ents;
 }

From patchwork Tue Dec 14 16:20:36 2021
Date: Tue, 14 Dec 2021 17:20:36 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-30-glider@google.com>
Subject: [PATCH 29/43] kmsan: handle memory sent to/from USB
From: Alexander Potapenko
To: glider@google.com
Depending on the value of is_out, kmsan_handle_urb() either marks the data
copied from a USB device to the kernel as initialized, or checks that the
data sent to the device is initialized.

Signed-off-by: Alexander Potapenko
Link: https://linux-review.googlesource.com/id/Ifa67fb72015d4de14c30e971556f99fc8b2ee506
---
 drivers/usb/core/urb.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c
index 30727729a44cc..0e84acc9aea53 100644
--- a/drivers/usb/core/urb.c
+++ b/drivers/usb/core/urb.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -426,6 +427,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
 			URB_SETUP_MAP_SINGLE | URB_SETUP_MAP_LOCAL |
 			URB_DMA_SG_COMBINED);
 	urb->transfer_flags |= (is_out ?
 			URB_DIR_OUT : URB_DIR_IN);
+	kmsan_handle_urb(urb, is_out);

 	if (xfertype != USB_ENDPOINT_XFER_CONTROL &&
 	    dev->state < USB_STATE_CONFIGURED)

From patchwork Tue Dec 14 16:20:37 2021
Date: Tue, 14 Dec 2021 17:20:37 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-31-glider@google.com>
Subject: [PATCH 30/43] kmsan: add tests for KMSAN
From: Alexander Potapenko
To: glider@google.com
The testing module triggers KMSAN warnings in different cases and checks
that the errors are properly reported, using console probes to capture the
tool's output.

Signed-off-by: Alexander Potapenko
Link: https://linux-review.googlesource.com/id/I49c3f59014cc37fd13541c80beb0b75a75244650
---
 lib/Kconfig.kmsan     |  16 ++
 mm/kmsan/Makefile     |   4 +
 mm/kmsan/kmsan_test.c | 444 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 464 insertions(+)
 create mode 100644 mm/kmsan/kmsan_test.c

diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan
index 02fd6db792b1f..940598f60b3a6 100644
--- a/lib/Kconfig.kmsan
+++ b/lib/Kconfig.kmsan
@@ -16,3 +16,19 @@ config KMSAN
 	  instrumentation provided by Clang and thus requires Clang to build.
 	  See for more details.
+
+if KMSAN
+
+config KMSAN_KUNIT_TEST
+	tristate "KMSAN integration test suite" if !KUNIT_ALL_TESTS
+	default KUNIT_ALL_TESTS
+	depends on TRACEPOINTS && KUNIT
+	help
+	  Test suite for KMSAN, testing various error detection scenarios,
+	  and checking that reports are correctly output to console.
+
+	  Say Y here if you want the test to be built into the kernel and run
+	  during boot; say M if you want the test to build as a module; say N
+	  if you are unsure.
+
+endif
diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile
index f57a956cb1c8b..7be6a7e92394f 100644
--- a/mm/kmsan/Makefile
+++ b/mm/kmsan/Makefile
@@ -20,3 +20,7 @@ CFLAGS_init.o := $(CC_FLAGS_KMSAN_RUNTIME)
 CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME)
 CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME)
 CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME)
+
+obj-$(CONFIG_KMSAN_KUNIT_TEST) += kmsan_test.o
+KMSAN_SANITIZE_kmsan_test.o := y
+CFLAGS_kmsan_test.o += $(call cc-disable-warning, uninitialized)
diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c
new file mode 100644
index 0000000000000..caf1094411487
--- /dev/null
+++ b/mm/kmsan/kmsan_test.c
@@ -0,0 +1,444 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Test cases for KMSAN.
+ * For each test case checks the presence (or absence) of generated reports.
+ * Relies on 'console' tracepoint to capture reports as they appear in the
+ * kernel log.
+ *
+ * Copyright (C) 2021, Google LLC.
+ * Author: Alexander Potapenko
+ *
+ */
+
+#include
+#include "kmsan.h"
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+static DEFINE_PER_CPU(int, per_cpu_var);
+
+/* Report as observed from console. */
+static struct {
+	spinlock_t lock;
+	bool available;
+	bool ignore; /* Stop console output collection. */
+	char header[256];
+} observed = {
+	.lock = __SPIN_LOCK_UNLOCKED(observed.lock),
+};
+
+/* Probe for console output: obtains observed lines of interest.
+ */
+static void probe_console(void *ignore, const char *buf, size_t len)
+{
+	unsigned long flags;
+
+	if (observed.ignore)
+		return;
+	spin_lock_irqsave(&observed.lock, flags);
+
+	if (strnstr(buf, "BUG: KMSAN: ", len)) {
+		/*
+		 * KMSAN report and related to the test.
+		 *
+		 * The provided @buf is not NUL-terminated; copy no more than
+		 * @len bytes and let strscpy() add the missing NUL-terminator.
+		 */
+		strscpy(observed.header, buf,
+			min(len + 1, sizeof(observed.header)));
+		WRITE_ONCE(observed.available, true);
+		observed.ignore = true;
+	}
+	spin_unlock_irqrestore(&observed.lock, flags);
+}
+
+/* Check if a report related to the test exists. */
+static bool report_available(void)
+{
+	return READ_ONCE(observed.available);
+}
+
+/* Information we expect in a report. */
+struct expect_report {
+	const char *error_type; /* Error type. */
+	/*
+	 * Kernel symbol from the error header, or NULL if no report is
+	 * expected.
+	 */
+	const char *symbol;
+};
+
+/* Check observed report matches information in @r. */
+static bool report_matches(const struct expect_report *r)
+{
+	typeof(observed.header) expected_header;
+	unsigned long flags;
+	bool ret = false;
+	const char *end;
+	char *cur;
+
+	/* Double-checked locking. */
+	if (!report_available() || !r->symbol)
+		return (!report_available() && !r->symbol);
+
+	/* Generate expected report contents. */
+
+	/* Title */
+	cur = expected_header;
+	end = &expected_header[sizeof(expected_header) - 1];
+
+	cur += scnprintf(cur, end - cur, "BUG: KMSAN: %s", r->error_type);
+
+	scnprintf(cur, end - cur, " in %s", r->symbol);
+	/* The exact offset won't match, remove it; also strip module name. */
+	cur = strchr(expected_header, '+');
+	if (cur)
+		*cur = '\0';
+
+	spin_lock_irqsave(&observed.lock, flags);
+	if (!report_available())
+		goto out; /* A new report is being captured. */
+
+	/* Finally match expected output to what we actually observed.
+ */
+	ret = strstr(observed.header, expected_header);
+out:
+	spin_unlock_irqrestore(&observed.lock, flags);
+
+	return ret;
+}
+
+/* ===== Test cases ===== */
+
+/* Prevent replacing branch with select in LLVM. */
+static noinline void check_true(char *arg)
+{
+	pr_info("%s is true\n", arg);
+}
+
+static noinline void check_false(char *arg)
+{
+	pr_info("%s is false\n", arg);
+}
+
+#define USE(x)				\
+	do {				\
+		if (x)			\
+			check_true(#x);	\
+		else			\
+			check_false(#x);\
+	} while (0)
+
+#define EXPECTATION_ETYPE_FN(e, reason, fn)	\
+	struct expect_report e = {		\
+		.error_type = reason,		\
+		.symbol = fn,			\
+	}
+
+#define EXPECTATION_NO_REPORT(e) EXPECTATION_ETYPE_FN(e, NULL, NULL)
+#define EXPECTATION_UNINIT_VALUE_FN(e, fn)	\
+	EXPECTATION_ETYPE_FN(e, "uninit-value", fn)
+#define EXPECTATION_UNINIT_VALUE(e) EXPECTATION_UNINIT_VALUE_FN(e, __func__)
+#define EXPECTATION_USE_AFTER_FREE(e)		\
+	EXPECTATION_ETYPE_FN(e, "use-after-free", __func__)
+
+static int signed_sum3(int a, int b, int c)
+{
+	return a + b + c;
+}
+
+static void test_uninit_kmalloc(struct kunit *test)
+{
+	EXPECTATION_UNINIT_VALUE(expect);
+	int *ptr;
+
+	kunit_info(test, "uninitialized kmalloc test (UMR report)\n");
+	ptr = kmalloc(sizeof(int), GFP_KERNEL);
+	USE(*ptr);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_init_kmalloc(struct kunit *test)
+{
+	EXPECTATION_NO_REPORT(expect);
+	int *ptr;
+
+	kunit_info(test, "initialized kmalloc test (no reports)\n");
+	ptr = kmalloc(sizeof(int), GFP_KERNEL);
+	memset(ptr, 0, sizeof(int));
+	USE(*ptr);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_init_kzalloc(struct kunit *test)
+{
+	EXPECTATION_NO_REPORT(expect);
+	int *ptr;
+
+	kunit_info(test, "initialized kzalloc test (no reports)\n");
+	ptr = kzalloc(sizeof(int), GFP_KERNEL);
+	USE(*ptr);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_uninit_multiple_args(struct kunit *test)
+{
+	EXPECTATION_UNINIT_VALUE(expect);
+	volatile char b = 3, c;
+	volatile int a;
+
+	kunit_info(test, "uninitialized local passed to fn (UMR report)\n");
+	USE(signed_sum3(a, b, c));
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_uninit_stack_var(struct kunit *test)
+{
+	EXPECTATION_UNINIT_VALUE(expect);
+	volatile int cond;
+
+	kunit_info(test, "uninitialized stack variable (UMR report)\n");
+	USE(cond);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_init_stack_var(struct kunit *test)
+{
+	EXPECTATION_NO_REPORT(expect);
+	volatile int cond = 1;
+
+	kunit_info(test, "initialized stack variable (no reports)\n");
+	USE(cond);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static noinline void two_param_fn_2(int arg1, int arg2)
+{
+	USE(arg1);
+	USE(arg2);
+}
+
+static noinline void one_param_fn(int arg)
+{
+	two_param_fn_2(arg, arg);
+	USE(arg);
+}
+
+static noinline void two_param_fn(int arg1, int arg2)
+{
+	int init = 0;
+
+	one_param_fn(init);
+	USE(arg1);
+	USE(arg2);
+}
+
+static void test_params(struct kunit *test)
+{
+	EXPECTATION_UNINIT_VALUE_FN(expect, "two_param_fn");
+	volatile int uninit, init = 1;
+
+	kunit_info(test,
+		   "uninit passed through a function parameter (UMR report)\n");
+	two_param_fn(uninit, init);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static noinline void do_uninit_local_array(char *array, int start, int stop)
+{
+	volatile char uninit;
+	int i;
+
+	for (i = start; i < stop; i++)
+		array[i] = uninit;
+}
+
+static void test_uninit_kmsan_check_memory(struct kunit *test)
+{
+	EXPECTATION_UNINIT_VALUE_FN(expect, "test_uninit_kmsan_check_memory");
+	volatile char local_array[8];
+
+	kunit_info(
+		test,
+		"kmsan_check_memory() called on uninit local (UMR report)\n");
+	do_uninit_local_array((char *)local_array, 5, 7);
+
+	kmsan_check_memory((char *)local_array, 8);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_init_kmsan_vmap_vunmap(struct kunit *test)
+{
+	EXPECTATION_NO_REPORT(expect);
+	const int npages = 2;
+	struct page **pages;
+	void *vbuf;
+	int i;
+
+	kunit_info(test, "pages initialized via vmap (no reports)\n");
+
+	pages = kmalloc_array(npages, sizeof(struct page), GFP_KERNEL);
+	for (i = 0; i < npages; i++)
+		pages[i] = alloc_page(GFP_KERNEL);
+	vbuf = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+	memset(vbuf, 0xfe, npages * PAGE_SIZE);
+	for (i = 0; i < npages; i++)
+		kmsan_check_memory(page_address(pages[i]), PAGE_SIZE);
+
+	if (vbuf)
+		vunmap(vbuf);
+	for (i = 0; i < npages; i++)
+		if (pages[i])
+			__free_page(pages[i]);
+	kfree(pages);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_init_vmalloc(struct kunit *test)
+{
+	EXPECTATION_NO_REPORT(expect);
+	int npages = 8, i;
+	char *buf;
+
+	kunit_info(test, "pages initialized via vmap (no reports)\n");
+	buf = vmalloc(PAGE_SIZE * npages);
+	buf[0] = 1;
+	memset(buf, 0xfe, PAGE_SIZE * npages);
+	USE(buf[0]);
+	for (i = 0; i < npages; i++)
+		kmsan_check_memory(&buf[PAGE_SIZE * i], PAGE_SIZE);
+	vfree(buf);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_uaf(struct kunit *test)
+{
+	EXPECTATION_USE_AFTER_FREE(expect);
+	volatile int value;
+	volatile int *var;
+
+	kunit_info(test, "use-after-free in kmalloc-ed buffer (UMR report)\n");
+	var = kmalloc(80, GFP_KERNEL);
+	var[3] = 0xfeedface;
+	kfree((int *)var);
+	/* Copy the invalid value before checking it.
+ */
+	value = var[3];
+	USE(value);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_percpu_propagate(struct kunit *test)
+{
+	EXPECTATION_UNINIT_VALUE(expect);
+	volatile int uninit, check;
+
+	kunit_info(test,
+		   "uninit local stored to per_cpu memory (UMR report)\n");
+
+	this_cpu_write(per_cpu_var, uninit);
+	check = this_cpu_read(per_cpu_var);
+	USE(check);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static void test_printk(struct kunit *test)
+{
+	EXPECTATION_UNINIT_VALUE_FN(expect, "number");
+	volatile int uninit;
+
+	kunit_info(test, "uninit local passed to pr_info() (UMR report)\n");
+	pr_info("%px contains %d\n", &uninit, uninit);
+	KUNIT_EXPECT_TRUE(test, report_matches(&expect));
+}
+
+static struct kunit_case kmsan_test_cases[] = {
+	KUNIT_CASE(test_uninit_kmalloc),
+	KUNIT_CASE(test_init_kmalloc),
+	KUNIT_CASE(test_init_kzalloc),
+	KUNIT_CASE(test_uninit_multiple_args),
+	KUNIT_CASE(test_uninit_stack_var),
+	KUNIT_CASE(test_init_stack_var),
+	KUNIT_CASE(test_params),
+	KUNIT_CASE(test_uninit_kmsan_check_memory),
+	KUNIT_CASE(test_init_kmsan_vmap_vunmap),
+	KUNIT_CASE(test_init_vmalloc),
+	KUNIT_CASE(test_uaf),
+	KUNIT_CASE(test_percpu_propagate),
+	KUNIT_CASE(test_printk),
+	{},
+};
+
+/* ===== End test cases ===== */
+
+static int test_init(struct kunit *test)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&observed.lock, flags);
+	observed.header[0] = '\0';
+	observed.ignore = false;
+	observed.available = false;
+	spin_unlock_irqrestore(&observed.lock, flags);
+
+	return 0;
+}
+
+static void test_exit(struct kunit *test)
+{
+}
+
+static struct kunit_suite kmsan_test_suite = {
+	.name = "kmsan",
+	.test_cases = kmsan_test_cases,
+	.init = test_init,
+	.exit = test_exit,
+};
+static struct kunit_suite *kmsan_test_suites[] = { &kmsan_test_suite, NULL };
+
+static void register_tracepoints(struct tracepoint *tp, void *ignore)
+{
+	check_trace_callback_type_console(probe_console);
+	if (!strcmp(tp->name,
+		    "console"))
+		WARN_ON(tracepoint_probe_register(tp, probe_console, NULL));
+}
+
+static void unregister_tracepoints(struct tracepoint *tp, void *ignore)
+{
+	if (!strcmp(tp->name, "console"))
+		tracepoint_probe_unregister(tp, probe_console, NULL);
+}
+
+/*
+ * We only want to do tracepoints setup and teardown once, therefore we have to
+ * customize the init and exit functions and cannot rely on kunit_test_suite().
+ */
+static int __init kmsan_test_init(void)
+{
+	/*
+	 * Because we want to be able to build the test as a module, we need to
+	 * iterate through all known tracepoints, since the static registration
+	 * won't work here.
+	 */
+	for_each_kernel_tracepoint(register_tracepoints, NULL);
+	return __kunit_test_suites_init(kmsan_test_suites);
+}
+
+static void kmsan_test_exit(void)
+{
+	__kunit_test_suites_exit(kmsan_test_suites);
+	for_each_kernel_tracepoint(unregister_tracepoints, NULL);
+	tracepoint_synchronize_unregister();
+}
+
+late_initcall_sync(kmsan_test_init);
+module_exit(kmsan_test_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Alexander Potapenko ");

From patchwork Tue Dec 14 16:20:38 2021
Date: Tue, 14 Dec 2021 17:20:38 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-32-glider@google.com>
Subject: [PATCH 31/43] kmsan: disable strscpy() optimization under KMSAN
From: Alexander Potapenko
To: glider@google.com
Disable the efficient 8-byte reading under KMSAN to avoid false positives.

Signed-off-by: Alexander Potapenko
Link: https://linux-review.googlesource.com/id/Iffd8336965e88fce915db2e6a9d6524422975f69
---
 lib/string.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/lib/string.c b/lib/string.c
index 485777c9da832..4ece4c7e7831b 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -197,6 +197,14 @@ ssize_t strscpy(char *dest, const char *src, size_t count)
 	max = 0;
 #endif

+	/*
+	 * read_word_at_a_time() below may read uninitialized bytes after the
+	 * trailing zero and use them in comparisons. Disable this optimization
+	 * under KMSAN to prevent false positive reports.
+	 */
+	if (IS_ENABLED(CONFIG_KMSAN))
+		max = 0;
+
 	while (max >= sizeof(unsigned long)) {
 		unsigned long c, data;

From patchwork Tue Dec 14 16:20:39 2021
Date: Tue, 14 Dec 2021 17:20:39 +0100
Message-Id: <20211214162050.660953-33-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 32/43] crypto: kmsan: disable accelerated configs under KMSAN
From: Alexander Potapenko
To: glider@google.com
KMSAN is unable to understand when initialized values come from assembly.
Disable accelerated configs in KMSAN builds to prevent false positive
reports.
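To illustrate why assembly defeats KMSAN, here is a toy userspace model (all names hypothetical; real KMSAN tracks per-bit shadow memory, not a per-slot flag): a store performed by uninstrumented code leaves the shadow state stale, so a later read of a perfectly valid value still looks uninitialized to the tool.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of MSan-style shadow tracking: one "initialized" bit per slot. */
#define NSLOTS 4
static int value[NSLOTS];
static bool shadow_init[NSLOTS]; /* true = the tool saw this slot written */

/* A store the instrumentation can see: updates both value and shadow. */
static void instrumented_store(int slot, int v)
{
	value[slot] = v;
	shadow_init[slot] = true;
}

/* Stand-in for an assembly routine: it really does initialize the value,
 * but the compiler never instrumented it, so the shadow state is stale. */
static void asm_like_store(int slot, int v)
{
	value[slot] = v; /* shadow_init[slot] is NOT updated */
}

/* Reading a slot whose shadow bit is unset triggers a (false) report. */
static bool read_triggers_report(int slot)
{
	return !shadow_init[slot];
}
```

The `depends on !KMSAN` lines below simply keep such uninstrumented assembly implementations out of KMSAN builds instead of teaching the tool about them.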
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Idb2334bf3a1b68b31b399709baefaa763038cc50
---
 crypto/Kconfig      | 30 ++++++++++++++++++++++++++++++
 drivers/net/Kconfig |  1 +
 2 files changed, 31 insertions(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 285f82647d2b7..c6c71acf7d56e 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -290,6 +290,7 @@ config CRYPTO_CURVE25519
 config CRYPTO_CURVE25519_X86
 	tristate "x86_64 accelerated Curve25519 scalar multiplication library"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_LIB_CURVE25519_GENERIC
 	select CRYPTO_ARCH_HAVE_LIB_CURVE25519
@@ -338,11 +339,13 @@ config CRYPTO_AEGIS128
 config CRYPTO_AEGIS128_SIMD
 	bool "Support SIMD acceleration for AEGIS-128"
 	depends on CRYPTO_AEGIS128 && ((ARM || ARM64) && KERNEL_MODE_NEON)
+	depends on !KMSAN # avoid false positives from assembly
 	default y
 
 config CRYPTO_AEGIS128_AESNI_SSE2
 	tristate "AEGIS-128 AEAD algorithm (x86_64 AESNI+SSE2 implementation)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_AEAD
 	select CRYPTO_SIMD
 	help
@@ -478,6 +481,7 @@ config CRYPTO_NHPOLY1305
 config CRYPTO_NHPOLY1305_SSE2
 	tristate "NHPoly1305 hash function (x86_64 SSE2 implementation)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_NHPOLY1305
 	help
 	  SSE2 optimized implementation of the hash function used by the
@@ -486,6 +490,7 @@ config CRYPTO_NHPOLY1305_SSE2
 config CRYPTO_NHPOLY1305_AVX2
 	tristate "NHPoly1305 hash function (x86_64 AVX2 implementation)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_NHPOLY1305
 	help
 	  AVX2 optimized implementation of the hash function used by the
@@ -599,6 +604,7 @@ config CRYPTO_CRC32C
 config CRYPTO_CRC32C_INTEL
 	tristate "CRC32c INTEL hardware acceleration"
 	depends on X86
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_HASH
 	help
 	  In Intel processor with SSE4.2 supported, the processor will
@@ -639,6 +645,7 @@ config CRYPTO_CRC32
 config CRYPTO_CRC32_PCLMUL
 	tristate "CRC32 PCLMULQDQ hardware acceleration"
 	depends on X86
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_HASH
 	select CRC32
 	help
@@ -704,6 +711,7 @@ config CRYPTO_BLAKE2S
 config CRYPTO_BLAKE2S_X86
 	tristate "BLAKE2s digest algorithm (x86 accelerated version)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_LIB_BLAKE2S_GENERIC
 	select CRYPTO_ARCH_HAVE_LIB_BLAKE2S
@@ -718,6 +726,7 @@ config CRYPTO_CRCT10DIF
 config CRYPTO_CRCT10DIF_PCLMUL
 	tristate "CRCT10DIF PCLMULQDQ hardware acceleration"
 	depends on X86 && 64BIT && CRC_T10DIF
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_HASH
 	help
 	  For x86_64 processors with SSE4.2 and PCLMULQDQ supported,
@@ -765,6 +774,7 @@ config CRYPTO_POLY1305
 config CRYPTO_POLY1305_X86_64
 	tristate "Poly1305 authenticator algorithm (x86_64/SSE2/AVX2)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_LIB_POLY1305_GENERIC
 	select CRYPTO_ARCH_HAVE_LIB_POLY1305
 	help
@@ -853,6 +863,7 @@ config CRYPTO_SHA1
 config CRYPTO_SHA1_SSSE3
 	tristate "SHA1 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SHA1
 	select CRYPTO_HASH
 	help
@@ -864,6 +875,7 @@ config CRYPTO_SHA1_SSSE3
 config CRYPTO_SHA256_SSSE3
 	tristate "SHA256 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SHA256
 	select CRYPTO_HASH
 	help
@@ -876,6 +888,7 @@ config CRYPTO_SHA256_SSSE3
 config CRYPTO_SHA512_SSSE3
 	tristate "SHA512 digest algorithm (SSSE3/AVX/AVX2)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SHA512
 	select CRYPTO_HASH
 	help
@@ -1034,6 +1047,7 @@ config CRYPTO_WP512
 config CRYPTO_GHASH_CLMUL_NI_INTEL
 	tristate "GHASH hash function (CLMUL-NI accelerated)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_CRYPTD
 	help
 	  This is the x86_64 CLMUL-NI accelerated implementation of
@@ -1084,6 +1098,7 @@ config CRYPTO_AES_TI
 config CRYPTO_AES_NI_INTEL
 	tristate "AES cipher algorithms (AES-NI)"
 	depends on X86
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_AEAD
 	select CRYPTO_LIB_AES
 	select CRYPTO_ALGAPI
@@ -1208,6 +1223,7 @@ config CRYPTO_BLOWFISH_COMMON
 config CRYPTO_BLOWFISH_X86_64
 	tristate "Blowfish cipher algorithm (x86_64)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_BLOWFISH_COMMON
 	imply CRYPTO_CTR
@@ -1238,6 +1254,7 @@ config CRYPTO_CAMELLIA
 config CRYPTO_CAMELLIA_X86_64
 	tristate "Camellia cipher algorithm (x86_64)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	imply CRYPTO_CTR
 	help
@@ -1254,6 +1271,7 @@ config CRYPTO_CAMELLIA_X86_64
 config CRYPTO_CAMELLIA_AESNI_AVX_X86_64
 	tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_CAMELLIA_X86_64
 	select CRYPTO_SIMD
@@ -1272,6 +1290,7 @@ config CRYPTO_CAMELLIA_AESNI_AVX_X86_64
 config CRYPTO_CAMELLIA_AESNI_AVX2_X86_64
 	tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX2)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_CAMELLIA_AESNI_AVX_X86_64
 	help
 	  Camellia cipher algorithm module (x86_64/AES-NI/AVX2).
@@ -1317,6 +1336,7 @@ config CRYPTO_CAST5
 config CRYPTO_CAST5_AVX_X86_64
 	tristate "CAST5 (CAST-128) cipher algorithm (x86_64/AVX)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_CAST5
 	select CRYPTO_CAST_COMMON
@@ -1340,6 +1360,7 @@ config CRYPTO_CAST6
 config CRYPTO_CAST6_AVX_X86_64
 	tristate "CAST6 (CAST-256) cipher algorithm (x86_64/AVX)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_CAST6
 	select CRYPTO_CAST_COMMON
@@ -1373,6 +1394,7 @@ config CRYPTO_DES_SPARC64
 config CRYPTO_DES3_EDE_X86_64
 	tristate "Triple DES EDE cipher algorithm (x86-64)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_LIB_DES
 	imply CRYPTO_CTR
@@ -1430,6 +1452,7 @@ config CRYPTO_CHACHA20
 config CRYPTO_CHACHA20_X86_64
 	tristate "ChaCha stream cipher algorithms (x86_64/SSSE3/AVX2/AVX-512VL)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_LIB_CHACHA_GENERIC
 	select CRYPTO_ARCH_HAVE_LIB_CHACHA
@@ -1473,6 +1496,7 @@ config CRYPTO_SERPENT
 config CRYPTO_SERPENT_SSE2_X86_64
 	tristate "Serpent cipher algorithm (x86_64/SSE2)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_SERPENT
 	select CRYPTO_SIMD
@@ -1492,6 +1516,7 @@ config CRYPTO_SERPENT_SSE2_X86_64
 config CRYPTO_SERPENT_SSE2_586
 	tristate "Serpent cipher algorithm (i586/SSE2)"
 	depends on X86 && !64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_SERPENT
 	select CRYPTO_SIMD
@@ -1511,6 +1536,7 @@ config CRYPTO_SERPENT_SSE2_586
 config CRYPTO_SERPENT_AVX_X86_64
 	tristate "Serpent cipher algorithm (x86_64/AVX)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_SERPENT
 	select CRYPTO_SIMD
@@ -1531,6 +1557,7 @@ config CRYPTO_SERPENT_AVX_X86_64
 config CRYPTO_SERPENT_AVX2_X86_64
 	tristate "Serpent cipher algorithm (x86_64/AVX2)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SERPENT_AVX_X86_64
 	help
 	  Serpent cipher algorithm, by Anderson, Biham & Knudsen.
@@ -1672,6 +1699,7 @@ config CRYPTO_TWOFISH_586
 config CRYPTO_TWOFISH_X86_64
 	tristate "Twofish cipher algorithm (x86_64)"
 	depends on (X86 || UML_X86) && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_ALGAPI
 	select CRYPTO_TWOFISH_COMMON
 	imply CRYPTO_CTR
@@ -1689,6 +1717,7 @@ config CRYPTO_TWOFISH_X86_64
 config CRYPTO_TWOFISH_X86_64_3WAY
 	tristate "Twofish cipher algorithm (x86_64, 3-way parallel)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_TWOFISH_COMMON
 	select CRYPTO_TWOFISH_X86_64
@@ -1709,6 +1738,7 @@ config CRYPTO_TWOFISH_X86_64_3WAY
 config CRYPTO_TWOFISH_AVX_X86_64
 	tristate "Twofish cipher algorithm (x86_64/AVX)"
 	depends on X86 && 64BIT
+	depends on !KMSAN # avoid false positives from assembly
 	select CRYPTO_SKCIPHER
 	select CRYPTO_SIMD
 	select CRYPTO_TWOFISH_COMMON

diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 6cccc3dc00bcf..d09dabc607a69 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -76,6 +76,7 @@ config WIREGUARD
 	tristate "WireGuard secure network tunnel"
 	depends on NET && INET
 	depends on IPV6 || !IPV6
+	depends on !KMSAN # KMSAN doesn't support the crypto configs below
 	select NET_UDP_TUNNEL
 	select DST_CACHE
 	select CRYPTO

From patchwork Tue Dec 14 16:20:40 2021
Date: Tue, 14 Dec 2021 17:20:40 +0100
Message-Id: <20211214162050.660953-34-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 33/43] kmsan: disable physical page merging in biovec
From: Alexander Potapenko
To: glider@google.com
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: rap9mygdmt7d1h3fji4rmqhcak41rg9y Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=ArbC5Qlw; spf=pass (imf09.hostedemail.com: domain of 3-8S4YQYKCHwlqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com designates 209.85.128.73 as permitted sender) smtp.mailfrom=3-8S4YQYKCHwlqnijwlttlqj.htrqnsz2-rrp0fhp.twl@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: C1ADB14000F X-HE-Tag: 1639499002-48023 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN metadata for consequent physical pages may be inconsequent, therefore accessing such pages together may lead to metadata corruption. We disable merging pages in biovec to prevent such corruptions. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Iece16041be5ee47904fbc98121b105e5be5fea5c --- block/blk.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/block/blk.h b/block/blk.h index ccde6e6f17360..e0c62a5d5639e 100644 --- a/block/blk.h +++ b/block/blk.h @@ -103,6 +103,13 @@ static inline bool biovec_phys_mergeable(struct request_queue *q, phys_addr_t addr1 = page_to_phys(vec1->bv_page) + vec1->bv_offset; phys_addr_t addr2 = page_to_phys(vec2->bv_page) + vec2->bv_offset; + /* + * Merging consequent physical pages may not work correctly under KMSAN + * if their metadata pages aren't consequent. Just disable merging. 
+	 */
+	if (IS_ENABLED(CONFIG_KMSAN))
+		return false;
+
 	if (addr1 + vec1->bv_len != addr2)
 		return false;
 	if (xen_domain() && !xen_biovec_phys_mergeable(vec1, vec2->bv_page))

From patchwork Tue Dec 14 16:20:41 2021
Date: Tue, 14 Dec 2021 17:20:41 +0100
Message-Id: <20211214162050.660953-35-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 34/43] kmsan: block: skip bio block merging logic for KMSAN
From: Alexander Potapenko
To: glider@google.com
KMSAN doesn't allow treating adjacent memory pages as such if they were
allocated by different alloc_pages() calls. The block layer, however, does
exactly that: adjacent pages end up being used together. To prevent this,
make page_is_mergeable() return false under KMSAN.
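The guard this patch adds can be modeled by a simplified stand-alone function (hypothetical userspace sketch; the real page_is_mergeable() also checks physical adjacency before merging across pages, which this sketch omits):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (~(uintptr_t)(PAGE_SIZE - 1))

/*
 * Decide whether a segment starting at page_addr may be merged with a
 * previous segment ending at vec_end_addr. kmsan_enabled models
 * IS_ENABLED(CONFIG_KMSAN): cross-page merges are refused because the
 * two pages' shadow metadata may not be adjacent.
 */
static bool page_is_mergeable(uintptr_t vec_end_addr, uintptr_t page_addr,
			      bool kmsan_enabled)
{
	bool same_page = (vec_end_addr & PAGE_MASK) == (page_addr & PAGE_MASK);

	if (!same_page && kmsan_enabled)
		return false;	/* adjacent pages: metadata may not be adjacent */
	return true;		/* same page, or KMSAN disabled */
}
```

With KMSAN enabled the function only ever merges within a single page, which is exactly the conservative behavior the patch introduces.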
Suggested-by: Eric Biggers
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ie29cc2464c70032347c32ab2a22e1e7a0b37b905
---
 block/bio.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/bio.c b/block/bio.c
index 15ab0d6d1c06e..b94283463196d 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -805,6 +805,8 @@ static inline bool page_is_mergeable(const struct bio_vec *bv,
 		return false;
 
 	*same_page = ((vec_end_addr & PAGE_MASK) == page_addr);
+	if (!*same_page && IS_ENABLED(CONFIG_KMSAN))
+		return false;
 	if (*same_page)
 		return true;
 	return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE);

From patchwork Tue Dec 14 16:20:42 2021
Date: Tue, 14 Dec 2021 17:20:42 +0100
Message-Id: <20211214162050.660953-36-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 35/43] x86: kmsan: use __msan_ string functions where possible.
From: Alexander Potapenko
To: glider@google.com

Unless stated otherwise (by explicitly calling __memcpy(), __memset() or
__memmove()), we want all string functions to call their __msan_ versions
(e.g. __msan_memcpy() instead of memcpy()), so that shadow and origin
values are updated accordingly. The bootloader must still use the default
string functions to avoid crashes.
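The interception pattern used in string_64.h can be sketched in plain C (names such as msan_memcpy and raw_memcpy are illustrative, not the kernel's): redefining memcpy as a macro routes every plain call through an instrumented wrapper, while a separately named raw function stays available for code that must not be instrumented, mirroring __memcpy().

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* The uninstrumented implementation, analogous to the kernel's __memcpy(). */
static void *raw_memcpy(void *dst, const void *src, size_t n)
{
	return memcpy(dst, src, n); /* plain libc copy, no bookkeeping */
}

static int msan_calls; /* counts copies the "instrumentation" observed */

/* Instrumented wrapper, analogous to __msan_memcpy(): the real one would
 * also copy shadow/origin metadata; here we only count the interception. */
static void *msan_memcpy(void *dst, const void *src, size_t n)
{
	msan_calls++;
	return raw_memcpy(dst, src, n);
}

/* From this point on, plain memcpy() calls go through the wrapper, just
 * like "#define memcpy __msan_memcpy" in the patch below. */
#undef memcpy
#define memcpy(dst, src, n) msan_memcpy(dst, src, n)
```

Calling `memcpy(b, a, 4)` after the redefinition bumps `msan_calls`, while `raw_memcpy(b, a, 4)` bypasses the wrapper entirely, which is the escape hatch the patch preserves for non-instrumented files.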
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I7ca9bd6b4f5c9b9816404862ae87ca7984395f33
---
 arch/x86/include/asm/string_64.h | 23 +++++++++++++++++++++--
 include/linux/fortify-string.h   |  2 ++
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 6e450827f677a..3b87d889b6e16 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -11,11 +11,23 @@ function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+#if defined(__SANITIZE_MEMORY__)
+#undef memcpy
+void *__msan_memcpy(void *dst, const void *src, size_t size);
+#define memcpy __msan_memcpy
+#else
 extern void *memcpy(void *to, const void *from, size_t len);
+#endif
 extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
+#if defined(__SANITIZE_MEMORY__)
+extern void *__msan_memset(void *s, int c, size_t n);
+#undef memset
+#define memset __msan_memset
+#else
 void *memset(void *s, int c, size_t n);
+#endif
 void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMSET16
@@ -55,7 +67,13 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n)
 }
 
 #define __HAVE_ARCH_MEMMOVE
+#if defined(__SANITIZE_MEMORY__)
+#undef memmove
+void *__msan_memmove(void *dest, const void *src, size_t len);
+#define memmove __msan_memmove
+#else
 void *memmove(void *dest, const void *src, size_t count);
+#endif
 void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
@@ -64,8 +82,7 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
-#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
-
+#if (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__))
 /*
  * For files that not instrumented (e.g. mm/slub.c) we
  * should use not instrumented version of mem* functions.
@@ -73,7 +90,9 @@ int strcmp(const char *cs, const char *ct);
 
 #undef memcpy
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
+#undef memmove
 #define memmove(dst, src, len) __memmove(dst, src, len)
+#undef memset
 #define memset(s, c, n) __memset(s, c, n)
 
 #ifndef __NO_FORTIFY

diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
index a6cd6815f2490..b2c74cb85e20e 100644
--- a/include/linux/fortify-string.h
+++ b/include/linux/fortify-string.h
@@ -198,6 +198,7 @@ __FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count)
 	return p;
 }
 
+#ifndef CONFIG_KMSAN
 __FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size)
 {
 	size_t p_size = __builtin_object_size(p, 0);
@@ -240,6 +241,7 @@ __FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size)
 	fortify_panic(__func__);
 	return __underlying_memmove(p, q, size);
 }
+#endif
 
 extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan);
 __FORTIFY_INLINE void *memscan(void *p, int c, __kernel_size_t size)

From patchwork Tue Dec 14 16:20:43 2021
From patchwork Tue Dec 14 16:20:43 2021
From: Alexander Potapenko <glider@google.com>
Date: Tue, 14 Dec 2021 17:20:43 +0100
Message-Id: <20211214162050.660953-37-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 36/43] x86: kmsan: sync metadata pages on page fault
KMSAN assumes shadow and origin pages for every allocated page are
accessible. For pages between [VMALLOC_START, VMALLOC_END] those
metadata pages start at KMSAN_VMALLOC_SHADOW_START and
KMSAN_VMALLOC_ORIGIN_START, therefore we must sync a bigger memory
region.
Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/Ia5bd541e54f1ecc11b86666c3ec87c62ac0bdfb8
---
 arch/x86/mm/fault.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 4bfed53e210ec..abed0aedf00d2 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -260,7 +260,7 @@ static noinline int vmalloc_fault(unsigned long address)
 }
 NOKPROBE_SYMBOL(vmalloc_fault);

-void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+void __arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 {
 	unsigned long addr;

@@ -284,6 +284,26 @@ void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 	}
 }

+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+{
+	__arch_sync_kernel_mappings(start, end);
+	/*
+	 * KMSAN maintains two additional metadata page mappings for the
+	 * [VMALLOC_START, VMALLOC_END) range. These mappings start at
+	 * KMSAN_VMALLOC_SHADOW_START and KMSAN_VMALLOC_ORIGIN_START and
+	 * have to be synced together with the vmalloc memory mapping.
+	 */
+	if (IS_ENABLED(CONFIG_KMSAN) &&
+	    start >= VMALLOC_START && end < VMALLOC_END) {
+		__arch_sync_kernel_mappings(
+			start - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START,
+			end - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START);
+		__arch_sync_kernel_mappings(
+			start - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START,
+			end - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START);
+	}
+}
+
 static bool low_pfn(unsigned long pfn)
 {
 	return pfn < max_low_pfn;
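The offset arithmetic in the hunk above can be illustrated in isolation. A minimal sketch, assuming made-up layout constants (the real values come from the kernel's memory-map headers, not from this snippet): a vmalloc address maps to its shadow and origin counterparts at a fixed linear offset.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout constants for illustration only. */
#define VMALLOC_START              0xffffc90000000000ull
#define KMSAN_VMALLOC_SHADOW_START 0xfffffc0000000000ull
#define KMSAN_VMALLOC_ORIGIN_START 0xfffffe0000000000ull

/* Mirror of the translation performed in arch_sync_kernel_mappings():
 * subtract the vmalloc base, add the metadata region base. */
static uint64_t vmalloc_shadow(uint64_t addr)
{
	return addr - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START;
}

static uint64_t vmalloc_origin(uint64_t addr)
{
	return addr - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START;
}
```

Because the mapping is a pure translation, syncing `[start, end)` of vmalloc space only requires syncing the two equally sized windows at the shadow and origin bases, which is exactly what the two extra `__arch_sync_kernel_mappings()` calls do.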
From patchwork Tue Dec 14 16:20:44 2021
From: Alexander Potapenko <glider@google.com>
Date: Tue, 14 Dec 2021 17:20:44 +0100
Message-Id: <20211214162050.660953-38-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 37/43] x86: kasan: kmsan: support CONFIG_GENERIC_CSUM on x86, enable it for KASAN/KMSAN

This is needed to allow memory tools like KASAN and KMSAN to see the
memory accesses from the checksum code. Without CONFIG_GENERIC_CSUM the
tools can't see memory accesses originating from handwritten assembly
code. For KASAN it's a question of detecting more bugs; for KMSAN using
the C implementation also helps avoid false positives originating from
seemingly uninitialized checksum values.
Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I3e95247be55b1112af59dbba07e8cbf34e50a581
---
 arch/x86/Kconfig                |  4 ++++
 arch/x86/include/asm/checksum.h | 16 ++++++++++------
 arch/x86/lib/Makefile           |  2 ++
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5c2ccb85f2efb..760570ff3f3e4 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -310,6 +310,10 @@ config GENERIC_ISA_DMA
 	def_bool y
 	depends on ISA_DMA_API

+config GENERIC_CSUM
+	bool
+	default y if KMSAN || KASAN
+
 config GENERIC_BUG
 	def_bool y
 	depends on BUG

diff --git a/arch/x86/include/asm/checksum.h b/arch/x86/include/asm/checksum.h
index bca625a60186c..6df6ece8a28ec 100644
--- a/arch/x86/include/asm/checksum.h
+++ b/arch/x86/include/asm/checksum.h
@@ -1,9 +1,13 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#define _HAVE_ARCH_COPY_AND_CSUM_FROM_USER 1
-#define HAVE_CSUM_COPY_USER
-#define _HAVE_ARCH_CSUM_AND_COPY
-#ifdef CONFIG_X86_32
-# include <asm/checksum_32.h>
+#ifdef CONFIG_GENERIC_CSUM
+# include <asm-generic/checksum.h>
 #else
-# include <asm/checksum_64.h>
+# define _HAVE_ARCH_COPY_AND_CSUM_FROM_USER 1
+# define HAVE_CSUM_COPY_USER
+# define _HAVE_ARCH_CSUM_AND_COPY
+# ifdef CONFIG_X86_32
+#  include <asm/checksum_32.h>
+# else
+#  include <asm/checksum_64.h>
+# endif
 #endif

diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index c6506c6a70922..81be8498353a6 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -66,7 +66,9 @@ endif
 lib-$(CONFIG_X86_USE_3DNOW) += mmx_32.o
 else
 obj-y += iomap_copy_64.o
+ifneq ($(CONFIG_GENERIC_CSUM),y)
 lib-y += csum-partial_64.o csum-copy_64.o csum-wrappers_64.o
+endif
 lib-y += clear_page_64.o copy_page_64.o
 lib-y += memmove_64.o memset_64.o
 lib-y += copy_user_64.o
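For context, a "generic" checksum in this sense is just the RFC 1071 one's-complement sum written in plain C. The sketch below is not the kernel's `lib/checksum.c` (which is word-at-a-time and faster); it is a deliberately byte-at-a-time version, so that every memory access is a regular C load that compiler-based tools like KASAN/KMSAN can instrument, which is exactly the property the assembly routines lack.

```c
#include <stddef.h>
#include <stdint.h>

/* One's-complement Internet checksum (RFC 1071), byte-at-a-time.
 * Bytes are paired big-endian into 16-bit words; odd trailing byte is
 * padded with zero; carries are folded back into the low 16 bits. */
static uint16_t csum_fold_simple(const uint8_t *data, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += ((uint32_t)data[i] << 8) | data[i + 1];
	if (len & 1)
		sum += (uint32_t)data[len - 1] << 8;
	while (sum >> 16)                 /* fold carries */
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

/* RFC 1071's worked example: the folded sum of these eight bytes is
 * 0xddf2, so the transmitted checksum is its complement, 0x220d. */
static uint16_t csum_demo(void)
{
	static const uint8_t v[8] = { 0x00, 0x01, 0xf2, 0x03,
				      0xf4, 0xf5, 0xf6, 0xf7 };
	return csum_fold_simple(v, sizeof(v));
}
```

The Makefile hunk above then simply drops the assembly `csum-*_64.o` objects when `CONFIG_GENERIC_CSUM=y`, letting the C implementation from `lib/` take over.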
From patchwork Tue Dec 14 16:20:45 2021
From: Alexander Potapenko <glider@google.com>
Date: Tue, 14 Dec 2021 17:20:45 +0100
Message-Id: <20211214162050.660953-39-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 38/43] x86: fs: kmsan: disable CONFIG_DCACHE_WORD_ACCESS
dentry_string_cmp() calls read_word_at_a_time(), which might read
uninitialized bytes to optimize string comparisons. Disabling
CONFIG_DCACHE_WORD_ACCESS should prohibit this optimization, as well as
(probably) similar ones.

Suggested-by: Andrey Konovalov
Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I4c0073224ac2897cafb8c037362c49dda9cfa133
---
 arch/x86/Kconfig | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 760570ff3f3e4..0dc77352bc3c9 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -125,7 +125,9 @@ config X86
 	select CLKEVT_I8253
 	select CLOCKSOURCE_VALIDATE_LAST_CYCLE
 	select CLOCKSOURCE_WATCHDOG
-	select DCACHE_WORD_ACCESS
+	# Word-size accesses may read uninitialized data past the trailing \0
+	# in strings and cause false KMSAN reports.
+	select DCACHE_WORD_ACCESS if !KMSAN
 	select DYNAMIC_SIGFRAME
 	select EDAC_ATOMIC_SCRUB
 	select EDAC_SUPPORT
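To see why word-at-a-time access trips a definedness checker, here is a userspace sketch of the technique (using the classic has-zero-byte bit trick that the kernel's word-at-a-time helpers are built on; the buffer setup is hypothetical). One 8-byte load replaces eight byte loads, but that load may legitimately cover bytes past the terminating `'\0'` that were never written; the result is still correct, yet a tool tracking uninitialized memory cannot prove the dead bytes don't matter.

```c
#include <stdint.h>
#include <string.h>

/* Nonzero iff any byte of v is 0x00 (classic SWAR bit trick). */
static int has_zero(uint64_t v)
{
	return ((v - 0x0101010101010101ull) & ~v &
		0x8080808080808080ull) != 0;
}

/* Does the 8-byte word starting at p contain a NUL terminator? */
static int word_contains_nul(const char *p)
{
	uint64_t w;

	memcpy(&w, p, sizeof(w)); /* one wide load instead of 8 narrow ones */
	return has_zero(w);
}

static int demo(void)
{
	char shortstr[16] = "abc";        /* bytes 4..15 zero-initialized */
	char longstr[16] = "abcdefghijk"; /* first 8 bytes all non-NUL */

	/* 1: NUL found in first word of "abc"; 0: none in "abcdefgh". */
	return word_contains_nul(shortstr) * 10 + word_contains_nul(longstr);
}
```

Here the trailing bytes are zero-initialized, so the read is safe; in `dentry_string_cmp()` they may be whatever follows the name in the allocation, which is exactly the access pattern KMSAN flags.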
From patchwork Tue Dec 14 16:20:46 2021
From: Alexander Potapenko <glider@google.com>
Date: Tue, 14 Dec 2021 17:20:46 +0100
Message-Id: <20211214162050.660953-40-glider@google.com>
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Subject: [PATCH 39/43] x86: kmsan: handle register passing from uninstrumented code

When calling KMSAN-instrumented functions from non-instrumented
functions, function parameters may not be initialized properly, leading
to false positive reports. In particular, this happens all the time when
calling interrupt handlers from `noinstr` IDT entries.

Fortunately, x86 code has instrumentation_begin() and
instrumentation_end(), which denote the regions where `noinstr` code
calls potentially instrumented code. We add calls to
kmsan_instrumentation_begin() to those regions, which:
 - wipe the current KMSAN state at the beginning of the region, ensuring
   that the first call of an instrumented function receives initialized
   parameters (this is a pretty good approximation of having all other
   instrumented functions receive initialized parameters);
 - unpoison the `struct pt_regs` set up by the non-instrumented assembly
   code.
Signed-off-by: Alexander Potapenko <glider@google.com>
---
Link: https://linux-review.googlesource.com/id/I435ec076cd21752c2f877f5da81f5eced62a2ea4
---
 arch/x86/entry/common.c         | 2 ++
 arch/x86/include/asm/idtentry.h | 5 +++++
 arch/x86/kernel/cpu/mce/core.c  | 1 +
 arch/x86/kernel/kvm.c           | 1 +
 arch/x86/kernel/nmi.c           | 1 +
 arch/x86/kernel/sev.c           | 2 ++
 arch/x86/kernel/traps.c         | 7 +++++++
 arch/x86/mm/fault.c             | 1 +
 kernel/entry/common.c           | 3 +++
 9 files changed, 23 insertions(+)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 6c2826417b337..a0f90588c514e 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -76,6 +77,7 @@ __visible noinstr void do_syscall_64(struct pt_regs *regs, int nr)
 	nr = syscall_enter_from_user_mode(regs, nr);

 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);
 	if (!do_syscall_x64(regs, nr) && !do_syscall_x32(regs, nr) && nr != -1) {
 		/* Invalid system call, but still a system call. */

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 1345088e99025..f025fdc0f25df 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -52,6 +52,7 @@ __visible noinstr void func(struct pt_regs *regs)		\
 	irqentry_state_t state = irqentry_enter(regs);		\
								\
 	instrumentation_begin();				\
+	kmsan_instrumentation_begin(regs);			\
 	__##func (regs);					\
 	instrumentation_end();					\
 	irqentry_exit(regs, state);				\
@@ -99,6 +100,7 @@ __visible noinstr void func(struct pt_regs *regs,	\
 	irqentry_state_t state = irqentry_enter(regs);		\
								\
 	instrumentation_begin();				\
+	kmsan_instrumentation_begin(regs);			\
 	__##func (regs, error_code);				\
 	instrumentation_end();					\
 	irqentry_exit(regs, state);				\
@@ -196,6 +198,7 @@ __visible noinstr void func(struct pt_regs *regs,	\
 	u32 vector = (u32)(u8)error_code;			\
								\
 	instrumentation_begin();				\
+	kmsan_instrumentation_begin(regs);			\
 	kvm_set_cpu_l1tf_flush_l1d();				\
 	run_irq_on_irqstack_cond(__##func, regs, vector);	\
 	instrumentation_end();					\
@@ -236,6 +239,7 @@ __visible noinstr void func(struct pt_regs *regs)	\
 	irqentry_state_t state = irqentry_enter(regs);		\
								\
 	instrumentation_begin();				\
+	kmsan_instrumentation_begin(regs);			\
 	kvm_set_cpu_l1tf_flush_l1d();				\
 	run_sysvec_on_irqstack_cond(__##func, regs);		\
 	instrumentation_end();					\
@@ -263,6 +267,7 @@ __visible noinstr void func(struct pt_regs *regs)	\
 	irqentry_state_t state = irqentry_enter(regs);		\
								\
 	instrumentation_begin();				\
+	kmsan_instrumentation_begin(regs);			\
 	__irq_enter_raw();					\
 	kvm_set_cpu_l1tf_flush_l1d();				\
 	__##func (regs);					\

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 6ed365337a3b1..b49e2c6bb8ca2 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1314,6 +1314,7 @@ static void queue_task_work(struct mce *m, char *msg, void (*func)(struct callba
 static noinstr void unexpected_machine_check(struct pt_regs *regs)
 {
 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);
 	pr_err("CPU#%d: Unexpected int18 (Machine Check)\n",
 	       smp_processor_id());
 	instrumentation_end();

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 59abbdad7729c..55ffe1bc73b00 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -250,6 +250,7 @@ noinstr bool __kvm_handle_async_pf(struct pt_regs *regs, u32 token)
 	state = irqentry_enter(regs);

 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);

 	/*
 	 * If the host managed to inject an async #PF into an interrupt

diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 4bce802d25fb1..d91327d271359 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -330,6 +330,7 @@ static noinstr void default_do_nmi(struct pt_regs *regs)
 	__this_cpu_write(last_nmi_rip, regs->ip);

 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);

 	handled = nmi_handle(NMI_LOCAL, regs);
 	__this_cpu_add(nmi_stats.normal, handled);

diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a9fc2ac7a8bd5..421d59b982cae 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1426,6 +1426,7 @@ DEFINE_IDTENTRY_VC_KERNEL(exc_vmm_communication)
 	irq_state = irqentry_nmi_enter(regs);
 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);

 	if (!vc_raw_handle_exception(regs, error_code)) {
 		/* Show some debug info */
@@ -1458,6 +1459,7 @@ DEFINE_IDTENTRY_VC_USER(exc_vmm_communication)
 	irqentry_enter_from_user_mode(regs);
 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);

 	if (!vc_raw_handle_exception(regs, error_code)) {
 		/*

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index c9d566dcf89a0..3a821010def63 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -230,6 +230,7 @@ static noinstr bool handle_bug(struct pt_regs *regs)
 	 * All lies, just get the WARN/BUG out.
 	 */
 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);
 	/*
 	 * Since we're emulating a CALL with exceptions, restore the interrupt
 	 * state to what it was at the exception site.
@@ -261,6 +262,7 @@ DEFINE_IDTENTRY_RAW(exc_invalid_op)
 	state = irqentry_enter(regs);

 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);
 	handle_invalid_op(regs);
 	instrumentation_end();
 	irqentry_exit(regs, state);
@@ -415,6 +417,7 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
 	irqentry_nmi_enter(regs);
 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);
 	notify_die(DIE_TRAP, str, regs, error_code, X86_TRAP_DF, SIGSEGV);

 	tsk->thread.error_code = error_code;
@@ -690,6 +693,7 @@ DEFINE_IDTENTRY_RAW(exc_int3)
 	if (user_mode(regs)) {
 		irqentry_enter_from_user_mode(regs);
 		instrumentation_begin();
+		kmsan_instrumentation_begin(regs);
 		do_int3_user(regs);
 		instrumentation_end();
 		irqentry_exit_to_user_mode(regs);
@@ -697,6 +701,7 @@ DEFINE_IDTENTRY_RAW(exc_int3)
 		irqentry_state_t irq_state = irqentry_nmi_enter(regs);

 		instrumentation_begin();
+		kmsan_instrumentation_begin(regs);
 		if (!do_int3(regs))
 			die("int3", regs, 0);
 		instrumentation_end();
@@ -896,6 +901,7 @@ static __always_inline void exc_debug_kernel(struct pt_regs *regs,
 	unsigned long dr7 = local_db_save();
 	irqentry_state_t irq_state = irqentry_nmi_enter(regs);
 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);

 	/*
 	 * If something gets miswired and we end up here for a user mode
@@ -975,6 +981,7 @@ static __always_inline void exc_debug_user(struct pt_regs *regs,
 	irqentry_enter_from_user_mode(regs);
 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);

 	/*
 	 * Start the virtual/ptrace DR6 value with just the DR_STEP mask

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index abed0aedf00d2..0437d2fe31ecb 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1558,6 +1558,7 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 	state = irqentry_enter(regs);

 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);
 	handle_page_fault(regs, error_code, address);
 	instrumentation_end();

diff --git a/kernel/entry/common.c b/kernel/entry/common.c
index d5a61d565ad5d..3a569ea5a78fb 100644
--- a/kernel/entry/common.c
+++ b/kernel/entry/common.c
@@ -104,6 +104,7 @@ noinstr long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall)
 	__enter_from_user_mode(regs);

 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);
 	local_irq_enable();
 	ret = __syscall_enter_from_user_work(regs, syscall);
 	instrumentation_end();
@@ -297,6 +298,7 @@ void syscall_exit_to_user_mode_work(struct pt_regs *regs)
 __visible noinstr void syscall_exit_to_user_mode(struct pt_regs *regs)
 {
 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);
 	__syscall_exit_to_user_mode_work(regs);
 	instrumentation_end();
 	__exit_to_user_mode();
@@ -310,6 +312,7 @@ noinstr void irqentry_enter_from_user_mode(struct pt_regs *regs)
 noinstr void irqentry_exit_to_user_mode(struct pt_regs *regs)
 {
 	instrumentation_begin();
+	kmsan_instrumentation_begin(regs);
 	exit_to_user_mode_prepare(regs);
 	instrumentation_end();
 	__exit_to_user_mode();
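The "wipe the current KMSAN state" step can be modeled in userspace. This is a toy sketch, not the series' actual `kmsan_instrumentation_begin()`: the struct layout and field names here are invented for illustration. The idea it captures is that parameter metadata left behind by uninstrumented (`noinstr`) code is garbage, so resetting the shadow slots to zero ("initialized" in KMSAN's encoding) makes the first instrumented callee see its arguments as fully defined.

```c
#include <string.h>

/* Invented analogue of KMSAN's per-task metadata-passing state. */
struct kmsan_context_state {
	unsigned char param_tls[64]; /* shadow of function parameters */
	unsigned char retval_tls[8]; /* shadow of the return value */
};

/* Model of the state wipe: zero shadow means "every bit defined",
 * so instrumented callees stop inheriting stale poison from the
 * uninstrumented entry path. */
static void wipe_state(struct kmsan_context_state *st)
{
	memset(st, 0, sizeof(*st));
}

static int demo(void)
{
	struct kmsan_context_state st;

	memset(&st, 0xff, sizeof(st)); /* garbage left by noinstr code */
	wipe_state(&st);
	return st.param_tls[0] == 0 && st.retval_tls[7] == 0;
}
```

Unpoisoning `struct pt_regs` is the same operation applied to the register snapshot written by assembly: its shadow is cleared so reads of the saved registers don't report false positives.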
Date: Tue, 14 Dec 2021 17:20:47 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-41-glider@google.com>
Subject: [PATCH 40/43] kmsan: kcov: unpoison area->list in kcov_remote_area_put()
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

KMSAN does not instrument kernel/kcov.c for performance reasons (with
CONFIG_KCOV=y virtually every place in the kernel invokes kcov
instrumentation). Therefore the tool may miss writes from kcov.c that
initialize memory.

When CONFIG_DEBUG_LIST is enabled, list pointers from kernel/kcov.c are
passed to instrumented helpers in lib/list_debug.c, resulting in false
positives. To work around these reports, we unpoison the contents of
area->list after initializing it.
Signed-off-by: Alexander Potapenko <glider@google.com>
Link: https://linux-review.googlesource.com/id/Ie17f2ee47a7af58f5cdf716d585ebf0769348a5a
---
 kernel/kcov.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/kernel/kcov.c b/kernel/kcov.c
index 36ca640c4f8e7..88ffdddc99ba1 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -152,6 +153,12 @@ static void kcov_remote_area_put(struct kcov_remote_area *area,
 	INIT_LIST_HEAD(&area->list);
 	area->size = size;
 	list_add(&area->list, &kcov_remote_areas);
+	/*
+	 * KMSAN doesn't instrument this file, so it may not know area->list
+	 * is initialized. Unpoison it explicitly to avoid reports in
+	 * kcov_remote_area_get().
+	 */
+	kmsan_unpoison_memory(&area->list, sizeof(struct list_head));
 }
 
 static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)

From patchwork Tue Dec 14 16:20:48 2021
Date: Tue, 14 Dec 2021 17:20:48 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-42-glider@google.com>
Subject: [PATCH 41/43] security: kmsan: fix interoperability with auto-initialization
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

Heap and stack initialization is great, but not when we are trying to find
uses of uninitialized memory.
When the kernel is built with KMSAN, having kernel memory initialization
enabled may introduce false negatives.

We disable CONFIG_INIT_STACK_ALL_PATTERN and CONFIG_INIT_STACK_ALL_ZERO
under CONFIG_KMSAN, making it impossible to auto-initialize stack
variables in KMSAN builds. We also disable
CONFIG_INIT_ON_ALLOC_DEFAULT_ON and CONFIG_INIT_ON_FREE_DEFAULT_ON to
prevent accidental use of heap auto-initialization.

We however still let the users enable heap auto-initialization at
boot time (by setting init_on_alloc=1 or init_on_free=1), in which case
a warning is printed.

Signed-off-by: Alexander Potapenko <glider@google.com>
Link: https://linux-review.googlesource.com/id/I86608dd867018683a14ae1870f1928ad925f42e9
---
 mm/page_alloc.c            | 4 ++++
 security/Kconfig.hardening | 4 ++++
 2 files changed, 8 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index fa8029b714a81..4218dea0c76a2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -855,6 +855,10 @@ void init_mem_debugging_and_hardening(void)
 	else
 		static_branch_disable(&init_on_free);
 
+	if (IS_ENABLED(CONFIG_KMSAN) &&
+	    (_init_on_alloc_enabled_early || _init_on_free_enabled_early))
+		pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n");
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 	if (!debug_pagealloc_enabled())
 		return;
diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening
index d051f8ceefddd..bd13a46024457 100644
--- a/security/Kconfig.hardening
+++ b/security/Kconfig.hardening
@@ -106,6 +106,7 @@ choice
 	config INIT_STACK_ALL_PATTERN
 		bool "pattern-init everything (strongest)"
 		depends on CC_HAS_AUTO_VAR_INIT_PATTERN
+		depends on !KMSAN
 		help
 		  Initializes everything on the stack (including padding)
 		  with a specific debug value.  This is intended to eliminate
@@ -124,6 +125,7 @@ choice
 	config INIT_STACK_ALL_ZERO
 		bool "zero-init everything (strongest and safest)"
 		depends on CC_HAS_AUTO_VAR_INIT_ZERO
+		depends on !KMSAN
 		help
 		  Initializes everything on the stack (including padding)
 		  with a zero value.  This is intended to eliminate all
@@ -208,6 +210,7 @@ config STACKLEAK_RUNTIME_DISABLE
 
 config INIT_ON_ALLOC_DEFAULT_ON
 	bool "Enable heap memory zeroing on allocation by default"
+	depends on !KMSAN
 	help
 	  This has the effect of setting "init_on_alloc=1" on the kernel
 	  command line. This can be disabled with "init_on_alloc=0".
@@ -220,6 +223,7 @@ config INIT_ON_ALLOC_DEFAULT_ON
 
 config INIT_ON_FREE_DEFAULT_ON
 	bool "Enable heap memory zeroing on free by default"
+	depends on !KMSAN
 	help
 	  This has the effect of setting "init_on_free=1" on the kernel
 	  command line. This can be disabled with "init_on_free=0".

From patchwork Tue Dec 14 16:20:49 2021
Date: Tue, 14 Dec 2021 17:20:49 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-43-glider@google.com>
Subject: [PATCH 42/43] objtool: kmsan: list KMSAN API functions as uaccess-safe
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

KMSAN inserts API function calls in a lot of places (function entries and
exits, local variables, memory accesses), so they may get called from the
uaccess regions as well.
Signed-off-by: Alexander Potapenko <glider@google.com>
Link: https://linux-review.googlesource.com/id/I242bc9816273fecad4ea3d977393784396bb3c35
---
 tools/objtool/check.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 21735829b860c..9620b5224754e 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -937,6 +937,25 @@ static const char *uaccess_safe_builtin[] = {
 	"__sanitizer_cov_trace_cmp4",
 	"__sanitizer_cov_trace_cmp8",
 	"__sanitizer_cov_trace_switch",
+	/* KMSAN */
+	"kmsan_copy_to_user",
+	"kmsan_report",
+	"kmsan_unpoison_memory",
+	"__msan_chain_origin",
+	"__msan_get_context_state",
+	"__msan_instrument_asm_store",
+	"__msan_metadata_ptr_for_load_1",
+	"__msan_metadata_ptr_for_load_2",
+	"__msan_metadata_ptr_for_load_4",
+	"__msan_metadata_ptr_for_load_8",
+	"__msan_metadata_ptr_for_load_n",
+	"__msan_metadata_ptr_for_store_1",
+	"__msan_metadata_ptr_for_store_2",
+	"__msan_metadata_ptr_for_store_4",
+	"__msan_metadata_ptr_for_store_8",
+	"__msan_metadata_ptr_for_store_n",
+	"__msan_poison_alloca",
+	"__msan_warning",
 	/* UBSAN */
 	"ubsan_type_mismatch_common",
 	"__ubsan_handle_type_mismatch",

From patchwork Tue Dec 14 16:20:50 2021
Date: Tue, 14 Dec 2021 17:20:50 +0100
In-Reply-To: <20211214162050.660953-1-glider@google.com>
Message-Id: <20211214162050.660953-44-glider@google.com>
Subject: [PATCH 43/43] x86: kmsan: enable KMSAN builds for x86
From: Alexander Potapenko <glider@google.com>
To: glider@google.com
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

Make KMSAN usable by adding the necessary Kconfig bits.

Signed-off-by: Alexander Potapenko <glider@google.com>
Link: https://linux-review.googlesource.com/id/I1d295ce8159ce15faa496d20089d953a919c125e
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0dc77352bc3c9..b5740d0ab0eb9 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -165,6 +165,7 @@ config X86
 	select HAVE_ARCH_KASAN			if X86_64
 	select HAVE_ARCH_KASAN_VMALLOC		if X86_64
 	select HAVE_ARCH_KFENCE
+	select HAVE_ARCH_KMSAN			if X86_64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT