From patchwork Tue Apr 26 16:42:30 2022
X-Patchwork-Id: 12827483
Date: Tue, 26 Apr 2022 18:42:30 +0200
Message-Id: <20220426164315.625149-2-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
Subject: [PATCH v3 01/46] x86: add missing include to sparsemem.h
From: Alexander Potapenko
To: glider@google.com
Cc: Alexander Viro, Andrew Morton, Andrey Konovalov, Andy Lutomirski,
    Arnd Bergmann, Borislav Petkov, Christoph Hellwig, Christoph Lameter,
    David Rientjes, Dmitry Vyukov, Eric Dumazet, Greg Kroah-Hartman,
    Herbert Xu, Ilya Leoshkevich, Ingo Molnar, Jens Axboe, Joonsoo Kim,
    Kees Cook, Marco Elver, Mark Rutland, Matthew Wilcox,
    "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek,
    Steven Rostedt, Thomas Gleixner, Vasily Gorbik, Vegard Nossum,
    Vlastimil Babka, kasan-dev@googlegroups.com, linux-mm@kvack.org,
    linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org

From: Dmitry Vyukov

sparsemem.h:34:32: error: unknown type name 'phys_addr_t'
  extern int phys_to_target_node(phys_addr_t start);
                                 ^
sparsemem.h:36:39: error: unknown type name 'u64'
  extern int memory_add_physaddr_to_nid(u64 start);
                                        ^

Signed-off-by: Dmitry Vyukov
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ifae221ce85d870d8f8d17173bd44d5cf9be2950f
---
 arch/x86/include/asm/sparsemem.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/sparsemem.h b/arch/x86/include/asm/sparsemem.h
index 6a9ccc1b2be5d..64df897c0ee30 100644
--- a/arch/x86/include/asm/sparsemem.h
+++ b/arch/x86/include/asm/sparsemem.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_X86_SPARSEMEM_H
 #define _ASM_X86_SPARSEMEM_H
 
+#include <linux/types.h>
+
 #ifdef CONFIG_SPARSEMEM
 /*
  * generic non-linear memory support:
From patchwork Tue Apr 26 16:42:31 2022
X-Patchwork-Id: 12827484
Date: Tue, 26 Apr 2022 18:42:31 +0200
Message-Id: <20220426164315.625149-3-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
Subject: [PATCH v3 02/46] stackdepot: reserve 5 extra bits in depot_stack_handle_t
From: Alexander Potapenko
To: glider@google.com
Some users (currently only KMSAN) may want to use spare bits in
depot_stack_handle_t. Let them do so by adding @extra_bits to
__stack_depot_save() to store arbitrary flags, and providing
stack_depot_get_extra_bits() to retrieve those flags.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I0587f6c777667864768daf07821d594bce6d8ff9
---
 include/linux/stackdepot.h |  8 ++++++++
 lib/stackdepot.c           | 29 ++++++++++++++++++++++++-----
 2 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index 17f992fe6355b..fd641d266bead 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -14,9 +14,15 @@
 #include
 
 typedef u32 depot_stack_handle_t;
+/*
+ * Number of bits in the handle that stack depot doesn't use. Users may store
+ * information in them.
+ */
+#define STACK_DEPOT_EXTRA_BITS 5
 
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
+					unsigned int extra_bits,
 					gfp_t gfp_flags, bool can_alloc);
 
 /*
@@ -41,6 +47,8 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 unsigned int stack_depot_fetch(depot_stack_handle_t handle,
 			       unsigned long **entries);
 
+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle);
+
 int stack_depot_snprint(depot_stack_handle_t handle, char *buf, size_t size,
 			int spaces);
 
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index bf5ba9af05009..6dc11a3b7b88e 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -42,7 +42,8 @@
 #define STACK_ALLOC_OFFSET_BITS (STACK_ALLOC_ORDER + PAGE_SHIFT - \
 				 STACK_ALLOC_ALIGN)
 #define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - \
-		STACK_ALLOC_NULL_PROTECTION_BITS - STACK_ALLOC_OFFSET_BITS)
+		STACK_ALLOC_NULL_PROTECTION_BITS - \
+		STACK_ALLOC_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)
 #define STACK_ALLOC_SLABS_CAP 8192
 #define STACK_ALLOC_MAX_SLABS \
 	(((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_SLABS_CAP) ? \
@@ -55,6 +56,7 @@ union handle_parts {
 		u32 slabindex : STACK_ALLOC_INDEX_BITS;
 		u32 offset : STACK_ALLOC_OFFSET_BITS;
 		u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS;
+		u32 extra : STACK_DEPOT_EXTRA_BITS;
 	};
 };
 
@@ -73,6 +75,14 @@ static int next_slab_inited;
 static size_t depot_offset;
 static DEFINE_RAW_SPINLOCK(depot_lock);
 
+unsigned int stack_depot_get_extra_bits(depot_stack_handle_t handle)
+{
+	union handle_parts parts = { .handle = handle };
+
+	return parts.extra;
+}
+EXPORT_SYMBOL(stack_depot_get_extra_bits);
+
 static bool init_stack_slab(void **prealloc)
 {
 	if (!*prealloc)
@@ -136,6 +146,7 @@ depot_alloc_stack(unsigned long *entries, int size, u32 hash, void **prealloc)
 	stack->handle.slabindex = depot_index;
 	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
 	stack->handle.valid = 1;
+	stack->handle.extra = 0;
 	memcpy(stack->entries, entries, flex_array_size(stack, entries, size));
 	depot_offset += required_size;
 
@@ -320,6 +331,7 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
  *
  * @entries:		Pointer to storage array
  * @nr_entries:		Size of the storage array
+ * @extra_bits:		Flags to store in unused bits of depot_stack_handle_t
  * @alloc_flags:	Allocation gfp flags
  * @can_alloc:		Allocate stack slabs (increased chance of failure if false)
  *
@@ -331,6 +343,10 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
  * If the stack trace in @entries is from an interrupt, only the portion up to
  * interrupt entry is saved.
  *
+ * Additional opaque flags can be passed in @extra_bits, stored in the unused
+ * bits of the stack handle, and retrieved using stack_depot_get_extra_bits()
+ * without calling stack_depot_fetch().
+ *
  * Context: Any context, but setting @can_alloc to %false is required if
  *          alloc_pages() cannot be used from the current context. Currently
  *          this is the case from contexts where neither %GFP_ATOMIC nor
@@ -340,10 +356,11 @@ EXPORT_SYMBOL_GPL(stack_depot_fetch);
  */
 depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 					unsigned int nr_entries,
+					unsigned int extra_bits,
 					gfp_t alloc_flags, bool can_alloc)
 {
 	struct stack_record *found = NULL, **bucket;
-	depot_stack_handle_t retval = 0;
+	union handle_parts retval = { .handle = 0 };
 	struct page *page = NULL;
 	void *prealloc = NULL;
 	unsigned long flags;
@@ -427,9 +444,11 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
 		free_pages((unsigned long)prealloc, STACK_ALLOC_ORDER);
 	}
 	if (found)
-		retval = found->handle.handle;
+		retval.handle = found->handle.handle;
 fast_exit:
-	return retval;
+	retval.extra = extra_bits;
+
+	return retval.handle;
 }
 EXPORT_SYMBOL_GPL(__stack_depot_save);
 
@@ -449,6 +468,6 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries,
 				      unsigned int nr_entries,
 				      gfp_t alloc_flags)
 {
-	return __stack_depot_save(entries, nr_entries, alloc_flags, true);
+	return __stack_depot_save(entries, nr_entries, 0, alloc_flags, true);
 }
 EXPORT_SYMBOL_GPL(stack_depot_save);
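A minimal usage sketch of the extended API above, assuming a hypothetical
caller; MY_EXTRA_FLAG and record_stack() are illustrative names, not part of
the patch:

	#include <linux/stackdepot.h>
	#include <linux/stacktrace.h>

	/* Illustrative flag: must fit into STACK_DEPOT_EXTRA_BITS (5 bits). */
	#define MY_EXTRA_FLAG 0x1

	static depot_stack_handle_t record_stack(gfp_t gfp)
	{
		unsigned long entries[16];
		unsigned int nr_entries;
		depot_stack_handle_t handle;

		nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
		/* Save the trace and stash a flag in the spare handle bits. */
		handle = __stack_depot_save(entries, nr_entries, MY_EXTRA_FLAG,
					    gfp, true);

		/* The flag can later be read back without fetching the trace. */
		if (handle && (stack_depot_get_extra_bits(handle) & MY_EXTRA_FLAG))
			pr_debug("extra flag set for handle %x\n", handle);

		return handle;
	}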
From patchwork Tue Apr 26 16:42:32 2022
X-Patchwork-Id: 12827485
Date: Tue, 26 Apr 2022 18:42:32 +0200
Message-Id: <20220426164315.625149-4-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
Subject: [PATCH v3 03/46] kasan: common: adapt to the new prototype of __stack_depot_save()
From: Alexander Potapenko
To: glider@google.com
Pass extra_bits=0, as KASAN does not intend to store additional information
in the stack handle. No functional change.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I932d8f4f11a41b7483e0d57078744cc94697607a
---
 mm/kasan/common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index d9079ec11f313..5d244746ac4fe 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -36,7 +36,7 @@ depot_stack_handle_t kasan_save_stack(gfp_t flags, bool can_alloc)
 	unsigned int nr_entries;
 
 	nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
-	return __stack_depot_save(entries, nr_entries, flags, can_alloc);
+	return __stack_depot_save(entries, nr_entries, 0, flags, can_alloc);
 }
 
 void kasan_set_track(struct kasan_track *track, gfp_t flags)
From patchwork Tue Apr 26 16:42:33 2022
X-Patchwork-Id: 12827486
Date: Tue, 26 Apr 2022 18:42:33 +0200
Message-Id: <20220426164315.625149-5-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
Subject: [PATCH v3 04/46] instrumented.h: allow instrumenting both sides of copy_from_user()
From: Alexander Potapenko
To: glider@google.com

Introduce instrument_copy_from_user_before() and
instrument_copy_from_user_after() hooks to be invoked before and after
the call to copy_from_user().
KASAN and KCSAN will be only using instrument_copy_from_user_before(), but
for KMSAN we'll need to insert code after copy_from_user().

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I855034578f0b0f126734cbd734fb4ae1d3a6af99
---
 include/linux/instrumented.h | 21 +++++++++++++++++++--
 include/linux/uaccess.h      | 19 ++++++++++++++-----
 lib/iov_iter.c               |  9 ++++++---
 lib/usercopy.c               |  3 ++-
 4 files changed, 41 insertions(+), 11 deletions(-)

diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
index 42faebbaa202a..ee8f7d17d34f5 100644
--- a/include/linux/instrumented.h
+++ b/include/linux/instrumented.h
@@ -120,7 +120,7 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
 }
 
 /**
- * instrument_copy_from_user - instrument writes of copy_from_user
+ * instrument_copy_from_user_before - add instrumentation before copy_from_user
  *
  * Instrument writes to kernel memory, that are due to copy_from_user (and
  * variants). The instrumentation should be inserted before the accesses.
@@ -130,10 +130,27 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n)
  * @n number of bytes to copy
  */
 static __always_inline void
-instrument_copy_from_user(const void *to, const void __user *from, unsigned long n)
+instrument_copy_from_user_before(const void *to, const void __user *from, unsigned long n)
 {
 	kasan_check_write(to, n);
 	kcsan_check_write(to, n);
 }
 
+/**
+ * instrument_copy_from_user_after - add instrumentation after copy_from_user
+ *
+ * Instrument writes to kernel memory, that are due to copy_from_user (and
+ * variants). The instrumentation should be inserted after the accesses.
+ *
+ * @to destination address
+ * @from source address
+ * @n number of bytes to copy
+ * @left number of bytes not copied (as returned by copy_from_user)
+ */
+static __always_inline void
+instrument_copy_from_user_after(const void *to, const void __user *from,
+				unsigned long n, unsigned long left)
+{
+}
+
 #endif /* _LINUX_INSTRUMENTED_H */
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 546179418ffa2..079bdea3b9dcd 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -58,20 +58,28 @@ static __always_inline __must_check unsigned long
 __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 {
-	instrument_copy_from_user(to, from, n);
+	unsigned long res;
+
+	instrument_copy_from_user_before(to, from, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, res);
+	return res;
 }
 
 static __always_inline __must_check unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	unsigned long res;
+
 	might_fault();
+	instrument_copy_from_user_before(to, from, n);
 	if (should_fail_usercopy())
 		return n;
-	instrument_copy_from_user(to, from, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_after(to, from, n, res);
+	return res;
 }
 
 /**
@@ -115,8 +123,9 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		instrument_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 6dd5330f7a995..fb19401c29c4f 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -159,13 +159,16 @@ static int copyout(void __user *to, const void *from, size_t n)
 
 static int copyin(void *to, const void __user *from, size_t n)
 {
+	size_t res = n;
+
 	if (should_fail_usercopy())
 		return n;
 	if (access_ok(from, n)) {
-		instrument_copy_from_user(to, from, n);
-		n = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
+		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
-	return n;
+	return res;
 }
 
 static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
diff --git a/lib/usercopy.c b/lib/usercopy.c
index 7413dd300516e..1505a52f23a01 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -12,8 +12,9 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
 	unsigned long res = n;
 	might_fault();
 	if (!should_fail_usercopy() && likely(access_ok(from, n))) {
-		instrument_copy_from_user(to, from, n);
+		instrument_copy_from_user_before(to, from, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_after(to, from, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
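For reference, the calling convention established by the hunks above boils
down to the following pattern (a condensed sketch; copy_from_user_checked()
is an illustrative name, not a new kernel API):

	static unsigned long copy_from_user_checked(void *to,
						    const void __user *from,
						    unsigned long n)
	{
		unsigned long left;

		/* Pre-copy checks: KASAN/KCSAN validate the destination here. */
		instrument_copy_from_user_before(to, from, n);
		/* raw_copy_from_user() returns the number of bytes NOT copied. */
		left = raw_copy_from_user(to, from, n);
		/* Post-copy hook: lets KMSAN mark the copied bytes initialized. */
		instrument_copy_from_user_after(to, from, n, left);
		return left;
	}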
From patchwork Tue Apr 26 16:42:34 2022
X-Patchwork-Id: 12827487
Date: Tue, 26 Apr 2022 18:42:34 +0200
Message-Id: <20220426164315.625149-6-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
Subject: [PATCH v3 05/46] x86: asm: instrument usercopy in get_user() and __put_user_size()
From: Alexander Potapenko
To: glider@google.com

Use hooks from instrumented.h to notify bug detection tools about usercopy
events in get_user() and put_user_size().

It's still unclear how to instrument put_user(), which assumes that
instrumentation code doesn't clobber RAX.
Signed-off-by: Alexander Potapenko
Reported-by: kernel test robot
Reported-by: kernel test robot
Reported-by: kernel test robot
Reported-by: kernel test robot
Reported-by: kernel test robot
---
Link: https://linux-review.googlesource.com/id/Ia9f12bfe5832623250e20f1859fdf5cc485a2fce
---
 arch/x86/include/asm/uaccess.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index f78e2b3501a19..0373d52a0543e 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -5,6 +5,7 @@
  * User space memory access functions
  */
 #include
+#include <linux/instrumented.h>
 #include
 #include
 #include
@@ -99,11 +100,13 @@ extern int __get_user_bad(void);
 	int __ret_gu;							\
 	register __inttype(*(ptr)) __val_gu asm("%"_ASM_DX);		\
 	__chk_user_ptr(ptr);						\
+	instrument_copy_from_user_before((void *)&(x), ptr, sizeof(*(ptr))); \
 	asm volatile("call __" #fn "_%P4"				\
		     : "=a" (__ret_gu), "=r" (__val_gu),		\
			ASM_CALL_CONSTRAINT				\
		     : "0" (ptr), "i" (sizeof(*(ptr))));		\
 	(x) = (__force __typeof__(*(ptr))) __val_gu;			\
+	instrument_copy_from_user_after((void *)&(x), ptr, sizeof(*(ptr)), 0); \
 	__builtin_expect(__ret_gu, 0);					\
 })
 
@@ -248,7 +251,9 @@ extern void __put_user_nocheck_8(void);
 
 #define __put_user_size(x, ptr, size, label)				\
 do {									\
+	__typeof__(*(ptr)) __pus_val = x;				\
 	__chk_user_ptr(ptr);						\
+	instrument_copy_to_user(ptr, &(__pus_val), size);		\
 	switch (size) {							\
 	case 1:								\
 		__put_user_goto(x, ptr, "b", "iq", label);		\
@@ -286,6 +291,7 @@ do {									\
 #define __get_user_size(x, ptr, size, label)				\
 do {									\
 	__chk_user_ptr(ptr);						\
+	instrument_copy_from_user_before((void *)&(x), ptr, size);	\
 	switch (size) {							\
 	case 1: {							\
 		unsigned char x_u8__;					\
@@ -305,6 +311,7 @@ do {									\
 	default:							\
 		(x) = __get_user_bad();					\
 	}								\
+	instrument_copy_from_user_after((void *)&(x), ptr, size, 0);	\
 } while (0)
 
 #define __get_user_asm(x, addr, itype, ltype, label)			\
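In practice this means an ordinary usercopy like the one below becomes
visible to the checkers wired into instrumented.h (an illustrative snippet,
not taken from the patch; read_user_value() is a hypothetical helper):

	static long read_user_value(int __user *uptr, int *out)
	{
		int val;

		/* The new hooks run around the fixed-register asm call. */
		if (get_user(val, uptr))
			return -EFAULT;
		*out = val;

		/* put_user() itself is not instrumented yet, per the note above. */
		if (put_user(val + 1, uptr))
			return -EFAULT;
		return 0;
	}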
From patchwork Tue Apr 26 16:42:35 2022
X-Patchwork-Id: 12827488
Date: Tue, 26 Apr 2022 18:42:35 +0200
Message-Id: <20220426164315.625149-7-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
Subject: [PATCH v3 06/46] asm-generic: instrument usercopy in cacheflush.h
From: Alexander Potapenko
To: glider@google.com

Notify memory tools about usercopy events in copy_to_user_page() and
copy_from_user_page().
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Ic1ee8da1886325f46ad67f52176f48c2c836c48f
---
 include/asm-generic/cacheflush.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index 4f07afacbc239..0f63eb325025f 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -2,6 +2,8 @@
 #ifndef _ASM_GENERIC_CACHEFLUSH_H
 #define _ASM_GENERIC_CACHEFLUSH_H
 
+#include <linux/instrumented.h>
+
 struct mm_struct;
 struct vm_area_struct;
 struct page;
@@ -105,6 +107,7 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 #ifndef copy_to_user_page
 #define copy_to_user_page(vma, page, vaddr, dst, src, len)	\
 	do {							\
+		instrument_copy_to_user(dst, src, len);		\
 		memcpy(dst, src, len);				\
 		flush_icache_user_page(vma, page, vaddr, len);	\
 	} while (0)
@@ -112,7 +115,11 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 
 #ifndef copy_from_user_page
 #define copy_from_user_page(vma, page, vaddr, dst, src, len)	   \
-	memcpy(dst, src, len)
+	do {							   \
+		instrument_copy_from_user_before(dst, src, len);   \
+		memcpy(dst, src, len);				   \
+		instrument_copy_from_user_after(dst, src, len, 0); \
+	} while (0)
 #endif
 
 #endif /* _ASM_GENERIC_CACHEFLUSH_H */
From patchwork Tue Apr 26 16:42:36 2022
X-Patchwork-Id: 12827489
Date: Tue, 26 Apr 2022 18:42:36 +0200
Message-Id: <20220426164315.625149-8-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
Subject: [PATCH v3 07/46] kmsan: add ReST documentation
From: Alexander Potapenko
To: glider@google.com

Add Documentation/dev-tools/kmsan.rst and reference it in the dev-tools
index.
Signed-off-by: Alexander Potapenko
---
v2:
 -- added a note that KMSAN is not intended for production use

Link: https://linux-review.googlesource.com/id/I751586f79418b95550a83c6035c650b5b01567cc
---
 Documentation/dev-tools/index.rst |   1 +
 Documentation/dev-tools/kmsan.rst | 414 ++++++++++++++++++++++++++++++
 2 files changed, 415 insertions(+)
 create mode 100644 Documentation/dev-tools/kmsan.rst

diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
index 4621eac290f46..6b0663075dc04 100644
--- a/Documentation/dev-tools/index.rst
+++ b/Documentation/dev-tools/index.rst
@@ -24,6 +24,7 @@ Documentation/dev-tools/testing-overview.rst
    kcov
    gcov
    kasan
+   kmsan
    ubsan
    kmemleak
    kcsan
diff --git a/Documentation/dev-tools/kmsan.rst b/Documentation/dev-tools/kmsan.rst
new file mode 100644
index 0000000000000..e116889da79d5
--- /dev/null
+++ b/Documentation/dev-tools/kmsan.rst
@@ -0,0 +1,414 @@
+=============================
+KernelMemorySanitizer (KMSAN)
+=============================
+
+KMSAN is a dynamic error detector aimed at finding uses of uninitialized
+values. It is based on compiler instrumentation, and is quite similar to the
+userspace `MemorySanitizer tool`_.
+
+An important note is that KMSAN is not intended for production use, because it
+drastically increases kernel memory footprint and slows the whole system down.
+
+Example report
+==============
+
+Here is an example of a KMSAN report::
+
+  =====================================================
+  BUG: KMSAN: uninit-value in test_uninit_kmsan_check_memory+0x1be/0x380 [kmsan_test]
+   test_uninit_kmsan_check_memory+0x1be/0x380 mm/kmsan/kmsan_test.c:273
+   kunit_run_case_internal lib/kunit/test.c:333
+   kunit_try_run_case+0x206/0x420 lib/kunit/test.c:374
+   kunit_generic_run_threadfn_adapter+0x6d/0xc0 lib/kunit/try-catch.c:28
+   kthread+0x721/0x850 kernel/kthread.c:327
+   ret_from_fork+0x1f/0x30 ??:?
+
+  Uninit was stored to memory at:
+   do_uninit_local_array+0xfa/0x110 mm/kmsan/kmsan_test.c:260
+   test_uninit_kmsan_check_memory+0x1a2/0x380 mm/kmsan/kmsan_test.c:271
+   kunit_run_case_internal lib/kunit/test.c:333
+   kunit_try_run_case+0x206/0x420 lib/kunit/test.c:374
+   kunit_generic_run_threadfn_adapter+0x6d/0xc0 lib/kunit/try-catch.c:28
+   kthread+0x721/0x850 kernel/kthread.c:327
+   ret_from_fork+0x1f/0x30 ??:?
+
+  Local variable uninit created at:
+   do_uninit_local_array+0x4a/0x110 mm/kmsan/kmsan_test.c:256
+   test_uninit_kmsan_check_memory+0x1a2/0x380 mm/kmsan/kmsan_test.c:271
+
+  Bytes 4-7 of 8 are uninitialized
+  Memory access of size 8 starts at ffff888083fe3da0
+
+  CPU: 0 PID: 6731 Comm: kunit_try_catch Tainted: G B E 5.16.0-rc3+ #104
+  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
+  =====================================================
+
+
+The report says that the local variable ``uninit`` was created uninitialized in
+``do_uninit_local_array()``. The lower stack trace corresponds to the place
+where this variable was created.
+
+The upper stack shows where the uninit value was used - in
+``test_uninit_kmsan_check_memory()``. The tool shows the bytes which were left
+uninitialized in the local variable, as well as the stack where the value was
+copied to another memory location before use.
+
+Please note that KMSAN only reports an error when an uninitialized value is
+actually used (e.g. in a condition or pointer dereference). A lot of
+uninitialized values in the kernel are never used, and reporting them would
+result in too many false positives.
+
+KMSAN and Clang
+===============
+
+In order for KMSAN to work the kernel must be built with Clang, which so far is
+the only compiler that has KMSAN support. The kernel instrumentation pass is
+based on the userspace `MemorySanitizer tool`_.
+
+How to build
+============
+
+In order to build a kernel with KMSAN you will need a fresh Clang (14.0.0+).
+Please refer to `LLVM documentation`_ for the instructions on how to build Clang.
+
+Now configure and build the kernel with CONFIG_KMSAN enabled.
+
+How KMSAN works
+===============
+
+KMSAN shadow memory
+-------------------
+
+KMSAN associates a metadata byte (also called shadow byte) with every byte of
+kernel memory. A bit in the shadow byte is set iff the corresponding bit of the
+kernel memory byte is uninitialized. Marking the memory uninitialized (i.e.
+setting its shadow bytes to ``0xff``) is called poisoning, marking it
+initialized (setting the shadow bytes to ``0x00``) is called unpoisoning.
+
+When a new variable is allocated on the stack, it is poisoned by default by
+instrumentation code inserted by the compiler (unless it is a stack variable
+that is immediately initialized). Any new heap allocation done without
+``__GFP_ZERO`` is also poisoned.
+
+Compiler instrumentation also tracks the shadow values with the help from the
+runtime library in ``mm/kmsan/``.
+
+The shadow value of a basic or compound type is an array of bytes of the same
+length. When a constant value is written into memory, that memory is unpoisoned.
+When a value is read from memory, its shadow memory is also obtained and
+propagated into all the operations which use that value. For every instruction
+that takes one or more values the compiler generates code that calculates the
+shadow of the result depending on those values and their shadows.
+
+Example::
+
+  int a = 0xff; // i.e. 0x000000ff
+  int b;
+  int c = a | b;
+
+In this case the shadow of ``a`` is ``0``, shadow of ``b`` is ``0xffffffff``,
+shadow of ``c`` is ``0xffffff00``. This means that the upper three bytes of
+``c`` are uninitialized, while the lower byte is initialized.
+
+
+Origin tracking
+---------------
+
+Every four bytes of kernel memory also have a so-called origin assigned to
+them. This origin describes the point in program execution at which the
+uninitialized value was created. Every origin is associated with either the
+full allocation stack (for heap-allocated memory), or the function containing
+the uninitialized variable (for locals).
+
+When an uninitialized variable is allocated on stack or heap, a new origin
+value is created, and that variable's origin is filled with that value.
+When a value is read from memory, its origin is also read and kept together
+with the shadow. For every instruction that takes one or more values the origin
+of the result is one of the origins corresponding to any of the uninitialized
+inputs. If a poisoned value is written into memory, its origin is written to the
+corresponding storage as well.
+
+Example 1::
+
+  int a = 42;
+  int b;
+  int c = a + b;
+
+In this case the origin of ``b`` is generated upon function entry, and is
+stored to the origin of ``c`` right before the addition result is written into
+memory.
+
+Several variables may share the same origin address, if they are stored in the
+same four-byte chunk. In this case every write to either variable updates the
+origin for all of them. We have to sacrifice precision in this case, because
+storing origins for individual bits (and even bytes) would be too costly.
+ +Example 2:: + + int combine(short a, short b) { + union ret_t { + int i; + short s[2]; + } ret; + ret.s[0] = a; + ret.s[1] = b; + return ret.i; + } + +If ``a`` is initialized and ``b`` is not, the shadow of the result would be +0xffff0000, and the origin of the result would be the origin of ``b``. +``ret.s[0]`` would have the same origin, but it will be never used, because +that variable is initialized. + +If both function arguments are uninitialized, only the origin of the second +argument is preserved. + +Origin chaining +~~~~~~~~~~~~~~~ + +To ease debugging, KMSAN creates a new origin for every store of an +uninitialized value to memory. The new origin references both its creation stack +and the previous origin the value had. This may cause increased memory +consumption, so we limit the length of origin chains in the runtime. + +Clang instrumentation API +------------------------- + +Clang instrumentation pass inserts calls to functions defined in +``mm/kmsan/instrumentation.c`` into the kernel code. + +Shadow manipulation +~~~~~~~~~~~~~~~~~~~ + +For every memory access the compiler emits a call to a function that returns a +pair of pointers to the shadow and origin addresses of the given memory:: + + typedef struct { + void *shadow, *origin; + } shadow_origin_ptr_t + + shadow_origin_ptr_t __msan_metadata_ptr_for_load_{1,2,4,8}(void *addr) + shadow_origin_ptr_t __msan_metadata_ptr_for_store_{1,2,4,8}(void *addr) + shadow_origin_ptr_t __msan_metadata_ptr_for_load_n(void *addr, uintptr_t size) + shadow_origin_ptr_t __msan_metadata_ptr_for_store_n(void *addr, uintptr_t size) + +The function name depends on the memory access size. + +The compiler makes sure that for every loaded value its shadow and origin +values are read from memory. When a value is stored to memory, its shadow and +origin are also stored using the metadata pointers. + +Origin tracking +~~~~~~~~~~~~~~~ + +A special function is used to create a new origin value for a local variable and +set the origin of that variable to that value:: + + void __msan_poison_alloca(void *addr, uintptr_t size, char *descr) + +Access to per-task data +~~~~~~~~~~~~~~~~~~~~~~~~~ + +At the beginning of every instrumented function KMSAN inserts a call to +``__msan_get_context_state()``:: + + kmsan_context_state *__msan_get_context_state(void) + +``kmsan_context_state`` is declared in ``include/linux/kmsan.h``:: + + struct kmsan_context_state { + char param_tls[KMSAN_PARAM_SIZE]; + char retval_tls[KMSAN_RETVAL_SIZE]; + char va_arg_tls[KMSAN_PARAM_SIZE]; + char va_arg_origin_tls[KMSAN_PARAM_SIZE]; + u64 va_arg_overflow_size_tls; + char param_origin_tls[KMSAN_PARAM_SIZE]; + depot_stack_handle_t retval_origin_tls; + }; + +This structure is used by KMSAN to pass parameter shadows and origins between +instrumented functions. + +String functions +~~~~~~~~~~~~~~~~ + +The compiler replaces calls to ``memcpy()``/``memmove()``/``memset()`` with the +following functions. 
These functions are also called when data structures are +initialized or copied, making sure shadow and origin values are copied alongside +with the data:: + + void *__msan_memcpy(void *dst, void *src, uintptr_t n) + void *__msan_memmove(void *dst, void *src, uintptr_t n) + void *__msan_memset(void *dst, int c, uintptr_t n) + +Error reporting +~~~~~~~~~~~~~~~ + +For each pointer dereference and each condition the compiler emits a shadow +check that calls ``__msan_warning()`` in the case a poisoned value is being +used:: + + void __msan_warning(u32 origin) + +``__msan_warning()`` causes KMSAN runtime to print an error report. + +Inline assembly instrumentation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +KMSAN instruments every inline assembly output with a call to:: + + void __msan_instrument_asm_store(void *addr, uintptr_t size) + +, which unpoisons the memory region. + +This approach may mask certain errors, but it also helps to avoid a lot of +false positives in bitwise operations, atomics etc. + +Sometimes the pointers passed into inline assembly do not point to valid memory. +In such cases they are ignored at runtime. + +Disabling the instrumentation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +A function can be marked with ``__no_kmsan_checks``. Doing so makes KMSAN +ignore uninitialized values in that function and mark its output as initialized. +As a result, the user will not get KMSAN reports related to that function. + +Another function attribute supported by KMSAN is ``__no_sanitize_memory``. +Applying this attribute to a function will result in KMSAN not instrumenting it, +which can be helpful if we do not want the compiler to mess up some low-level +code (e.g. that marked with ``noinstr``). + +This however comes at a cost: stack allocations from such functions will have +incorrect shadow/origin values, likely leading to false positives. Functions +called from non-instrumented code may also receive incorrect metadata for their +parameters. + +As a rule of thumb, avoid using ``__no_sanitize_memory`` explicitly. + +It is also possible to disable KMSAN for a single file (e.g. main.o):: + + KMSAN_SANITIZE_main.o := n + +or for the whole directory:: + + KMSAN_SANITIZE := n + +in the Makefile. Think of this as applying ``__no_sanitize_memory`` to every +function in the file or directory. Most users won't need KMSAN_SANITIZE, unless +their code gets broken by KMSAN (e.g. runs at early boot time). + +Runtime library +--------------- + +The code is located in ``mm/kmsan/``. + +Per-task KMSAN state +~~~~~~~~~~~~~~~~~~~~ + +Every task_struct has an associated KMSAN task state that holds the KMSAN +context (see above) and a per-task flag disallowing KMSAN reports:: + + struct kmsan_context { + ... + bool allow_reporting; + struct kmsan_context_state cstate; + ... + } + + struct task_struct { + ... + struct kmsan_context kmsan; + ... + } + + +KMSAN contexts +~~~~~~~~~~~~~~ + +When running in a kernel task context, KMSAN uses ``current->kmsan.cstate`` to +hold the metadata for function parameters and return values. + +But in the case the kernel is running in the interrupt, softirq or NMI context, +where ``current`` is unavailable, KMSAN switches to per-cpu interrupt state:: + + DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); + +Metadata allocation +~~~~~~~~~~~~~~~~~~~ + +There are several places in the kernel for which the metadata is stored. + +1. Each ``struct page`` instance contains two pointers to its shadow and +origin pages:: + + struct page { + ... + struct page *shadow, *origin; + ... 
+ }; + +At boot-time, the kernel allocates shadow and origin pages for every available +kernel page. This is done quite late, when the kernel address space is already +fragmented, so normal data pages may arbitrarily interleave with the metadata +pages. + +This means that in general for two contiguous memory pages their shadow/origin +pages may not be contiguous. So, if a memory access crosses the boundary +of a memory block, accesses to shadow/origin memory may potentially corrupt +other pages or read incorrect values from them. + +In practice, contiguous memory pages returned by the same ``alloc_pages()`` +call will have contiguous metadata, whereas if these pages belong to two +different allocations their metadata pages can be fragmented. + +For the kernel data (``.data``, ``.bss`` etc.) and percpu memory regions +there also are no guarantees on metadata contiguity. + +In the case ``__msan_metadata_ptr_for_XXX_YYY()`` hits the border between two +pages with non-contiguous metadata, it returns pointers to fake shadow/origin regions:: + + char dummy_load_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE))); + char dummy_store_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE))); + +``dummy_load_page`` is zero-initialized, so reads from it always yield zeroes. +All stores to ``dummy_store_page`` are ignored. + +2. For vmalloc memory and modules, there is a direct mapping between the memory +range, its shadow and origin. KMSAN reduces the vmalloc area by 3/4, making only +the first quarter available to ``vmalloc()``. The second quarter of the vmalloc +area contains shadow memory for the first quarter, the third one holds the +origins. A small part of the fourth quarter contains shadow and origins for the +kernel modules. Please refer to ``arch/x86/include/asm/pgtable_64_types.h`` for +more details. + +When an array of pages is mapped into a contiguous virtual memory space, their +shadow and origin pages are similarly mapped into contiguous regions. + +3. For CPU entry area there are separate per-CPU arrays that hold its +metadata:: + + DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_shadow); + DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_origin); + +When calculating shadow and origin addresses for a given memory address, KMSAN +checks whether the address belongs to the physical page range, the virtual page +range or CPU entry area. + +Handling ``pt_regs`` +~~~~~~~~~~~~~~~~~~~~ + +Many functions receive a ``struct pt_regs`` holding the register state at a +certain point. Registers do not have (easily calculatable) shadow or origin +associated with them, so we assume they are always initialized. + +References +========== + +E. Stepanov, K. Serebryany. `MemorySanitizer: fast detector of uninitialized +memory use in C++ +`_. +In Proceedings of CGO 2015. + +.. _MemorySanitizer tool: https://clang.llvm.org/docs/MemorySanitizer.html +.. 
_LLVM documentation: https://llvm.org/docs/GettingStarted.html From patchwork Tue Apr 26 16:42:37 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827490 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 59FA0C433FE for ; Tue, 26 Apr 2022 16:44:43 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E88986B0083; Tue, 26 Apr 2022 12:44:42 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id E12886B0085; Tue, 26 Apr 2022 12:44:42 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id CB0E46B0087; Tue, 26 Apr 2022 12:44:42 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id BE6996B0083 for ; Tue, 26 Apr 2022 12:44:42 -0400 (EDT) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay11.hostedemail.com (Postfix) with ESMTP id 89A9880ABE for ; Tue, 26 Apr 2022 16:44:42 +0000 (UTC) X-FDA: 79399604004.03.DBFC14C Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf07.hostedemail.com (Postfix) with ESMTP id 12B2940050 for ; Tue, 26 Apr 2022 16:44:39 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id qw33-20020a1709066a2100b006f001832229so9338259ejc.4 for ; Tue, 26 Apr 2022 09:44:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=/0H5A8X92xIp0p19j3P2M66qBbmfiGtveehlyZCJw24=; b=FvkMkIGRscjUzYZMvLVFQr9H/QRVOHYZw4hHknosvP6NOZtQxpNsZd0RtzhJXelcRc z988hAfskDDmoJnPIiI0jXNGvUulB8rFxBfA8ab5Sms91LUTaW4M13z8UFY9KvIBTi0U mydviyOj8aDprvPJfYeUMhjCpbeC8rH4hZxCn8OWbU3YolhExGVHn2JyusklOjF/6sO4 vYyxMokIMpAW+3L8FTBFk9+HgjrICCnt0AsWcRVYbtUA2ELiqXnXSxBNwEJZcwS3tI4o 91Z59BTGhx3wH4FUgP2JaAvwZ5iQeqRkzHHrNtLbtKEtxaZ4rzTmakhpwXtIFjy1dzme JxoA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=/0H5A8X92xIp0p19j3P2M66qBbmfiGtveehlyZCJw24=; b=vdxQusq43UX+yp7Lu5WBiE1Ic/mY7xNDEq6+bYKtdYkFZOOgLRLB3ADyqXsHBvdZqD GQQULGGJ1mCUhu06jMvYUOTBX3tsEW7f1sJbE+7FJ30P3iBOQoMTJRSWb+1xjrWjCgil KlNMqaBsRPyObS1vQjthJhpdqPN78Y2EqY6YNXZjQFgZAKC5ji7IYXdiVmGHh1eQBALh SV7ZxS3rLLw1GimpqX134Wwybfr5fn7mitLxc2Bv2J5udUtbkdP/QXUgJnthC0UKtzuG qQ9fF+b8M/7ZuMMuNHq5rWir+2huNy4qsK74PX1CahRC5lkC/ahSoetwhF2u6YKAa2d7 7CTg== X-Gm-Message-State: AOAM5315PQy0E2XAyU6tCMnavVPFvPrBh5YVlTUSZJIT0vDuxwKAopKb c0JKecZ2J6/9BoQgggam5PsjMGwqFdE= X-Google-Smtp-Source: ABdhPJyznSX2i8WSeLQTTkki2cbZTho3ac9KkIH//+BcoF3wC20oGterW7mlNzbNsl8P2+ms/2tHQUoo1l8= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:aa7:d310:0:b0:425:f22f:763f with SMTP id p16-20020aa7d310000000b00425f22f763fmr9236955edq.163.1650991480781; Tue, 26 Apr 2022 09:44:40 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:37 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-9-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 
2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 08/46] kmsan: introduce __no_sanitize_memory and __no_kmsan_checks From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 12B2940050 X-Stat-Signature: i7bdn1eqxh56mfsgtc3ehnxxsu1jxyx8 Authentication-Results: imf07.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=FvkMkIGR; spf=pass (imf07.hostedemail.com: domain of 3eCFoYgYKCHUZebWXkZhhZeX.Vhfebgnq-ffdoTVd.hkZ@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3eCFoYgYKCHUZebWXkZhhZeX.Vhfebgnq-ffdoTVd.hkZ@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1650991479-807580 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: __no_sanitize_memory is a function attribute that instructs KMSAN to skip a function during instrumentation. This is needed to e.g. implement the noinstr functions. __no_kmsan_checks is a function attribute that makes KMSAN ignore the uninitialized values coming from the function's inputs, and initialize the function's outputs. Functions marked with this attribute can't be inlined into functions not marked with it, and vice versa. This behavior is overridden by __always_inline. __SANITIZE_MEMORY__ is a macro that's defined iff the file is instrumented with KMSAN. This is not the same as CONFIG_KMSAN, which is defined for every file. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I004ff0360c918d3cd8b18767ddd1381c6d3281be --- include/linux/compiler-clang.h | 23 +++++++++++++++++++++++ include/linux/compiler-gcc.h | 6 ++++++ 2 files changed, 29 insertions(+) diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h index babb1347148c5..c561064921449 100644 --- a/include/linux/compiler-clang.h +++ b/include/linux/compiler-clang.h @@ -51,6 +51,29 @@ #define __no_sanitize_undefined #endif +#if __has_feature(memory_sanitizer) +#define __SANITIZE_MEMORY__ +/* + * Unlike other sanitizers, KMSAN still inserts code into functions marked with + * no_sanitize("kernel-memory"). Using disable_sanitizer_instrumentation + * provides the behavior consistent with other __no_sanitize_ attributes, + * guaranteeing that __no_sanitize_memory functions remain uninstrumented. + */ +#define __no_sanitize_memory __disable_sanitizer_instrumentation + +/* + * The __no_kmsan_checks attribute ensures that a function does not produce + * false positive reports by: + * - initializing all local variables and memory stores in this function; + * - skipping all shadow checks; + * - passing initialized arguments to this function's callees. 
+ */ +#define __no_kmsan_checks __attribute__((no_sanitize("kernel-memory"))) +#else +#define __no_sanitize_memory +#define __no_kmsan_checks +#endif + /* * Support for __has_feature(coverage_sanitizer) was added in Clang 13 together * with no_sanitize("coverage"). Prior versions of Clang support coverage diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h index 52299c957c98e..f1a7ce3f6e6fd 100644 --- a/include/linux/compiler-gcc.h +++ b/include/linux/compiler-gcc.h @@ -133,6 +133,12 @@ #define __SANITIZE_ADDRESS__ #endif +/* + * GCC does not support KMSAN. + */ +#define __no_sanitize_memory +#define __no_kmsan_checks + /* * Turn individual warnings and errors on and off locally, depending * on version. From patchwork Tue Apr 26 16:42:38 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827491 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 56495C433EF for ; Tue, 26 Apr 2022 16:44:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E3A5A6B0085; Tue, 26 Apr 2022 12:44:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DEB946B0087; Tue, 26 Apr 2022 12:44:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C8A556B0088; Tue, 26 Apr 2022 12:44:45 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id BB2E56B0085 for ; Tue, 26 Apr 2022 12:44:45 -0400 (EDT) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay12.hostedemail.com (Postfix) with ESMTP id 8498D120B66 for ; Tue, 26 Apr 2022 16:44:45 +0000 (UTC) X-FDA: 79399604130.30.83F5A2E Received: from mail-lj1-f202.google.com (mail-lj1-f202.google.com [209.85.208.202]) by imf26.hostedemail.com (Postfix) with ESMTP id 65716140042 for ; Tue, 26 Apr 2022 16:44:43 +0000 (UTC) Received: by mail-lj1-f202.google.com with SMTP id n9-20020a2e82c9000000b002435af2e8b9so4785385ljh.20 for ; Tue, 26 Apr 2022 09:44:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ednioRVQi+463GyxSxojLtYXRmMb4k6reZrR8s7PDTk=; b=s7nDKeE2uO8tPBSHXhKtgeVd4KJBULrFKzji5KxbrLSqAsd9qwd5XqsiW4sxXc2Zuj /LDHzi6oYDxLjayBmNbbHHd8x60Z1q11m24TiSwhP8RYbreiv3W6YuCWDnMHnOA40Zg6 2XAh/SQC3o2WwNcd888BCR1ayDkPme9nLNf+yFJmAhhH+5m2makPEnGrUgDcpKDvSpAx uZ/EMSRFu7oZ7g4w8LFxCUmwK7SAeXqsUvws74aRat1Udaq+/e8oYjQBcJ/MQ0phtMYp ItWJ4UPus1+bGv6Mz6YHzcXh/jQw/SqfpbBVzObyfR2JQ5g2erl9ttRuMQ0dfEHG3Yf8 GoPQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ednioRVQi+463GyxSxojLtYXRmMb4k6reZrR8s7PDTk=; b=S+PgetjpPXxCgfmxQnJW/F2s7uggX7sB7ek58ZHDyazHmb4/7jbseAl58VkwZ/Iv6M cTxJuwLuypAMCQJAMkuOauPrLBGJSPyxx4RNEEcuVunhHhcutcC7+Tew9JcI7uSg/UVg 6KyzKecMRyW+WBaUUfCWkjv08X43CNvnrUluUK1d+nS/gohFHLsgFdfW3pA2e1dJvZnJ x6Hd6gbSpE5Tt2FIdh446+z7REMkhwgz+gs7BG2oBOv4VFpDUcjdg/Uow484qLwt+PoM Gl+FP57uCslsRdxpjTNExgB1byeWTTqj06kPZaBD6S5hvhThnxrawU0cuDWdlrA4xYHZ gUJw== X-Gm-Message-State: 
AOAM533IA0BrHGofJACx9p3Ez/fb0PlmAAR3Ol4F3Uygnnv/1befBm/A z0uFnuY7PYLKHnPUeoDEwYdAZDqaQ7E= X-Google-Smtp-Source: ABdhPJxHrJcKOCcTiJCxWbbHxqyDNc8yKbgnQVG+zhnL2F1NE0oryK8Ze862y3s98aS3FdAQ1AuU/NMiTCE= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6512:114f:b0:471:b097:4a29 with SMTP id m15-20020a056512114f00b00471b0974a29mr17572189lfg.93.1650991483323; Tue, 26 Apr 2022 09:44:43 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:38 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-10-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 09/46] kmsan: mark noinstr as __no_sanitize_memory From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: sgqmko3zhhdapf8hj4e79t3uyhn8fibb X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 65716140042 Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=s7nDKeE2; spf=pass (imf26.hostedemail.com: domain of 3eyFoYgYKCHgcheZanckkcha.Ykihejqt-iigrWYg.knc@flex--glider.bounces.google.com designates 209.85.208.202 as permitted sender) smtp.mailfrom=3eyFoYgYKCHgcheZanckkcha.Ykihejqt-iigrWYg.knc@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-HE-Tag: 1650991483-886966 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: noinstr functions should never be instrumented, so make KMSAN skip them by applying the __no_sanitize_memory attribute. 
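Not part of the patch, but as a hypothetical sketch of the effect: with noinstr now implying __no_sanitize_memory, a function like the invented helper below is compiled without any __msan_* calls, which is what entry code and other noinstr code requires.

	/* Hypothetical example, for illustration only. */
	noinstr void early_entry_helper(void)
	{
		/*
		 * No KMSAN metadata loads or __msan_warning() calls are
		 * emitted for this body. As with any non-instrumented
		 * function, its stack locals carry imprecise shadow.
		 */
	}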
Signed-off-by: Alexander Potapenko --- v2: -- moved this patch earlier in the series per Mark Rutland's request Link: https://linux-review.googlesource.com/id/I3c9abe860b97b49bc0c8026918b17a50448dec0d --- include/linux/compiler_types.h | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h index 1c2c33ae1b37d..a9ba5edd8208b 100644 --- a/include/linux/compiler_types.h +++ b/include/linux/compiler_types.h @@ -227,7 +227,8 @@ struct ftrace_likely_data { /* Section for code which can't be instrumented at all */ #define noinstr \ noinline notrace __attribute((__section__(".noinstr.text"))) \ - __no_kcsan __no_sanitize_address __no_profile __no_sanitize_coverage + __no_kcsan __no_sanitize_address __no_profile __no_sanitize_coverage \ + __no_sanitize_memory #endif /* __KERNEL__ */ From patchwork Tue Apr 26 16:42:39 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827492 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id BB048C43217 for ; Tue, 26 Apr 2022 16:44:48 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5631C6B0087; Tue, 26 Apr 2022 12:44:48 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 513076B0088; Tue, 26 Apr 2022 12:44:48 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 388F76B0089; Tue, 26 Apr 2022 12:44:48 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 2C2286B0087 for ; Tue, 26 Apr 2022 12:44:48 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay12.hostedemail.com (Postfix) with ESMTP id F25B2120B4E for ; Tue, 26 Apr 2022 16:44:47 +0000 (UTC) X-FDA: 79399604214.13.4E9B0BE Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf06.hostedemail.com (Postfix) with ESMTP id 3ACDA180049 for ; Tue, 26 Apr 2022 16:44:46 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id mp18-20020a1709071b1200b006e7f314ecb3so9373129ejc.23 for ; Tue, 26 Apr 2022 09:44:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=H2XcU3OLx+0JTyyML32bzgUT5MJiV2tDmnYI11nI1dE=; b=m/GoUZBVtlZMRq+FF0SH5aCHlTGSvaapuxEDbVMKlEUnlusQPCCZ7ebjbcCdUrWYb5 a8Bnd3p86A4nE5NEzf9NbCXsFTEls9lTknebhOvAE06AdOtYHsT4/XF7t90n+nJ5iebk sRGdLEAOOW9bUwdhOZndjscgKwN1eOVOb9hwxXJ7DP2x/BYwqNzZ6yzf2ohW5qzq+FEZ g3mr3p0uqiTWn5KToQ7MIQy5yDLS3i1YymgzGDrCkkGyLo8f/tQJWjvGzdC5snsMrVUx RKkkljd+V3QJEWhwnCDg+P32Vbl+YhkiLRuywjeqifPG0/ok+r70FeWu7XYIUm7eYfDk 3sOQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=H2XcU3OLx+0JTyyML32bzgUT5MJiV2tDmnYI11nI1dE=; b=J4PMRP/c/iWSYUICpjlqy5sjRGVJyl8O+5BgxynBpXbuVt8i4DE+Wy54fMqqh2eVhv zIYNLzSvVl4+guxZNrQVmvhJyd2Vss/ADo23OUYgShWE35n797LMRfSGDQcHDqCXoY5D /ct47VCdzk/Unzazem48+GqSev+r0jMRnXod4VMRpYJkpm4E3T9iqDnqLwSWKDvA+ZjU 5ggi7LMKTqFODOiGhpj0Z8xcnxU4jr9NtW1s28q+ArUd2ixQ00JQn1knCbyhPCfP89Rl 
HaHEtBCPvR+y9AUIo2wicfk7ZVIdN33kfX0Hr8Rpif1uXGypR4sVyhkbz/33cGS+lqmn sCxg== X-Gm-Message-State: AOAM531dKl+9hoqHuPp8/R60xA6zXpGow5F4I3HYwMfWaosXjSQbUtkd oRy/pGwm/3HEAAXfYBIgkR25tWKQ/yk= X-Google-Smtp-Source: ABdhPJxuZFKkNdAIYxCe6Wr3rHxB/+GLrdr6eXOf7wzKyJPWcdL4QfIWrMfSB3fnw8xzMi6hwbSucSjlH8I= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:4315:b0:426:155:e4a3 with SMTP id m21-20020a056402431500b004260155e4a3mr1554638edc.324.1650991485934; Tue, 26 Apr 2022 09:44:45 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:39 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-11-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 10/46] x86: kmsan: pgtable: reduce vmalloc space From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: a8je3b9ar7o3pn98db19ytp9gqaeuc48 X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 3ACDA180049 Authentication-Results: imf06.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="m/GoUZBV"; spf=pass (imf06.hostedemail.com: domain of 3fSFoYgYKCHoejgbcpemmejc.amkjglsv-kkitYai.mpe@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3fSFoYgYKCHoejgbcpemmejc.amkjglsv-kkitYai.mpe@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-HE-Tag: 1650991486-825580 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN is going to use 3/4 of existing vmalloc space to hold the metadata, therefore we lower VMALLOC_END to make sure vmalloc() doesn't allocate past the first 1/4. Signed-off-by: Alexander Potapenko --- v2: -- added x86: to the title Link: https://linux-review.googlesource.com/id/I9d8b7f0a88a639f1263bc693cbd5c136626f7efd --- arch/x86/include/asm/pgtable_64_types.h | 41 ++++++++++++++++++++++++- arch/x86/mm/init_64.c | 2 +- 2 files changed, 41 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h index 91ac106545703..7f15d43754a34 100644 --- a/arch/x86/include/asm/pgtable_64_types.h +++ b/arch/x86/include/asm/pgtable_64_types.h @@ -139,7 +139,46 @@ extern unsigned int ptrs_per_p4d; # define VMEMMAP_START __VMEMMAP_BASE_L4 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */ -#define VMALLOC_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1) +#define VMEMORY_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1) + +#ifndef CONFIG_KMSAN +#define VMALLOC_END VMEMORY_END +#else +/* + * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4 + * are used to keep the metadata for virtual pages. 
The memory formerly + * belonging to vmalloc area is now laid out as follows: + * + * 1st quarter: VMALLOC_START to VMALLOC_END - new vmalloc area + * 2nd quarter: KMSAN_VMALLOC_SHADOW_START to + * VMALLOC_END+KMSAN_VMALLOC_SHADOW_OFFSET - vmalloc area shadow + * 3rd quarter: KMSAN_VMALLOC_ORIGIN_START to + * VMALLOC_END+KMSAN_VMALLOC_ORIGIN_OFFSET - vmalloc area origins + * 4th quarter: KMSAN_MODULES_SHADOW_START to KMSAN_MODULES_ORIGIN_START + * - shadow for modules, + * KMSAN_MODULES_ORIGIN_START to + * KMSAN_MODULES_ORIGIN_START + MODULES_LEN - origins for modules. + */ +#define VMALLOC_QUARTER_SIZE ((VMALLOC_SIZE_TB << 40) >> 2) +#define VMALLOC_END (VMALLOC_START + VMALLOC_QUARTER_SIZE - 1) + +/* + * vmalloc metadata addresses are calculated by adding shadow/origin offsets + * to vmalloc address. + */ +#define KMSAN_VMALLOC_SHADOW_OFFSET VMALLOC_QUARTER_SIZE +#define KMSAN_VMALLOC_ORIGIN_OFFSET (VMALLOC_QUARTER_SIZE << 1) + +#define KMSAN_VMALLOC_SHADOW_START (VMALLOC_START + KMSAN_VMALLOC_SHADOW_OFFSET) +#define KMSAN_VMALLOC_ORIGIN_START (VMALLOC_START + KMSAN_VMALLOC_ORIGIN_OFFSET) + +/* + * The shadow/origin for modules are placed one by one in the last 1/4 of + * vmalloc space. + */ +#define KMSAN_MODULES_SHADOW_START (VMALLOC_END + KMSAN_VMALLOC_ORIGIN_OFFSET + 1) +#define KMSAN_MODULES_ORIGIN_START (KMSAN_MODULES_SHADOW_START + MODULES_LEN) +#endif /* CONFIG_KMSAN */ #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE) /* The module sections ends with the start of the fixmap */ diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c index 96d34ebb20a9e..fcea37beb3911 100644 --- a/arch/x86/mm/init_64.c +++ b/arch/x86/mm/init_64.c @@ -1287,7 +1287,7 @@ static void __init preallocate_vmalloc_pages(void) unsigned long addr; const char *lvl; - for (addr = VMALLOC_START; addr <= VMALLOC_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) { + for (addr = VMALLOC_START; addr <= VMEMORY_END; addr = ALIGN(addr + 1, PGDIR_SIZE)) { pgd_t *pgd = pgd_offset_k(addr); p4d_t *p4d; pud_t *pud; From patchwork Tue Apr 26 16:42:40 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827493 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 559EAC4332F for ; Tue, 26 Apr 2022 16:44:51 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E396A6B0088; Tue, 26 Apr 2022 12:44:50 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DEB066B0089; Tue, 26 Apr 2022 12:44:50 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C8C616B008A; Tue, 26 Apr 2022 12:44:50 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id BB4D76B0088 for ; Tue, 26 Apr 2022 12:44:50 -0400 (EDT) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id A1BE420D56 for ; Tue, 26 Apr 2022 16:44:50 +0000 (UTC) X-FDA: 79399604340.01.C896D9E Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf31.hostedemail.com (Postfix) with ESMTP id 6123B2005A for ; Tue, 26 Apr 2022 16:44:42 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id 
sh14-20020a1709076e8e00b006f3b7adb9ffso1013709ejc.16 for ; Tue, 26 Apr 2022 09:44:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=SEm8RFoD3mzqXjcHWM9Z/XfNnVTnRxEBaTJIwFUC6FI=; b=V1UPpoK10uyPLSVeGxmKmWLpz7MVRMo+4uiBOQTE+btQjeiF7bX14i8xIN4hWxvfsI BKp6ezW66yD/fgl4XMpuBdON6QxtTZCTto768iFnXFc2LH7l5QwZpTox8pUyK0cZZMcU 9diZqYTWDNczn8waY+ysJcw4lWMp4tfA90aPMraK8ja0L3fYlvQlBZsap+cS3cIoYkUy i+K1sOjum2D1Snfi9ezBpkT3FqJQbpZ1ZkJb010JMdv/VFlzNAOneWXO4NmYGXJdCYB8 LxjKRUcmGqytLknAlHvn7Ijdu7TEekTSbAaIp3JV0enFaH8f7pcQ4jDbiUGAEReAXkN8 4fmA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=SEm8RFoD3mzqXjcHWM9Z/XfNnVTnRxEBaTJIwFUC6FI=; b=kVHA7A7Bf+nfdkeJZcyJIqRxCB2VSj00z1LHMkLp+A6gCZpIRq9qXRPbsyybHDVqRn ZOwEkfuePi2QknEGoykWqWSyG+PSfgReyWtZHd6PVSO4DFROHUQfOfitmIlt7xb5lcUL L4KOKQizYhjTZWCJD0dTDEC7ynQ7AdeG/9POJrIuncHlpMUM0pMNVKkr5z43QbAt+39i Lvt07ZHpemaj9QY6qTFMEZ9+ri9fMR49bDJkbD/wG13MnTi1AHXcX/6o4rHJchsPtyQ6 cAkUzB9qFAFKR4MiWcV/zoDHVGmkA7HMGf/YigAVJO29ZMWm99q2rdSc/acOpAaDy+pE PRCg== X-Gm-Message-State: AOAM531NlhdEguHY6NuEy2y4tge0/r4WDE4E1NJcqH6Fr0VvJOahkPwC MY7jwS2fjBlrrp1IyynzIe3POtd/+yc= X-Google-Smtp-Source: ABdhPJzlxnxEzq09x8VRCHG67jl1BT+tZuNt6ukFhmOsg7s6+uxFS5/FlyCj/NdcpcGzVA6sRk/t/wnlBr4= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:35d2:b0:424:1eb0:45c2 with SMTP id z18-20020a05640235d200b004241eb045c2mr25704880edc.152.1650991488637; Tue, 26 Apr 2022 09:44:48 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:40 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-12-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 11/46] libnvdimm/pfn_dev: increase MAX_STRUCT_PAGE_SIZE From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. 
Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 6123B2005A X-Stat-Signature: hgqrdco4pokt65ugnfs8sdd4c5gq7mqb Authentication-Results: imf31.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=V1UPpoK1; spf=pass (imf31.hostedemail.com: domain of 3gCFoYgYKCH0hmjefshpphmf.dpnmjovy-nnlwbdl.psh@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3gCFoYgYKCH0hmjefshpphmf.dpnmjovy-nnlwbdl.psh@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1650991482-60131 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN adds extra metadata fields to struct page, so it does not fit into 64 bytes anymore. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I353796acc6a850bfd7bb342aa1b63e616fc614f1 --- drivers/nvdimm/nd.h | 2 +- drivers/nvdimm/pfn_devs.c | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h index ec5219680092d..85ca5b4da3cf3 100644 --- a/drivers/nvdimm/nd.h +++ b/drivers/nvdimm/nd.h @@ -652,7 +652,7 @@ void devm_namespace_disable(struct device *dev, struct nd_namespace_common *ndns); #if IS_ENABLED(CONFIG_ND_CLAIM) /* max struct page size independent of kernel config */ -#define MAX_STRUCT_PAGE_SIZE 64 +#define MAX_STRUCT_PAGE_SIZE 128 int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap); #else static inline int nvdimm_setup_pfn(struct nd_pfn *nd_pfn, diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c index c31e184bfa45e..d51a3cd6581b1 100644 --- a/drivers/nvdimm/pfn_devs.c +++ b/drivers/nvdimm/pfn_devs.c @@ -784,7 +784,7 @@ static int nd_pfn_init(struct nd_pfn *nd_pfn) * when populating the vmemmap. This *should* be equal to * PMD_SIZE for most architectures. * - * Also make sure size of struct page is less than 64. We + * Also make sure size of struct page is less than 128. We * want to make sure we use large enough size here so that * we don't have a dynamic reserve space depending on * struct page size. 
But we also want to make sure we notice From patchwork Tue Apr 26 16:42:41 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827495 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id DDAF9C4332F for ; Tue, 26 Apr 2022 16:44:54 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 981B86B008A; Tue, 26 Apr 2022 12:44:53 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 92E6F6B008C; Tue, 26 Apr 2022 12:44:53 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 70DB46B0092; Tue, 26 Apr 2022 12:44:53 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 553CE6B008A for ; Tue, 26 Apr 2022 12:44:53 -0400 (EDT) Received: from smtpin31.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 24A0826C72 for ; Tue, 26 Apr 2022 16:44:53 +0000 (UTC) X-FDA: 79399604466.31.32440FD Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf10.hostedemail.com (Postfix) with ESMTP id 9F312C0052 for ; Tue, 26 Apr 2022 16:44:44 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id gs30-20020a1709072d1e00b006f00c67c0b0so8986453ejc.11 for ; Tue, 26 Apr 2022 09:44:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc:content-transfer-encoding; bh=I+h0bEkbop6PXGR8KD8WntxvfCzQLAueILixWq8NhjE=; b=E6xJ/acQBP6/CIvPzNzrog8j0UQtY4+YDiBy1Y2P5ryLVQxievpREa605KL0ckgMgL PYQkbBb5fcgwcFcwstuSgrInoccYcmP+VDNaeegRem0WsF/vhZPd5acgG9TOzU3RhjAn AGjLlOXeGa6b6rh+VnjJpCUnAyuL7Axz2gtuiWtIElXfuFiL4lEhdW4T6u5n758zGSWO 30IWtaJyj7Z0UF1Au0eAJ613Z/j/b/QSiA6d26n1IpLLUTiun8TRgiVuvA5bNQ/KZHJc srwcxlNKsw4p5c3AbeIMtMPfcvNfA3fr108gsqpEesCinMcoYLS9+vqMNLjF4y6686/k OMWg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc:content-transfer-encoding; bh=I+h0bEkbop6PXGR8KD8WntxvfCzQLAueILixWq8NhjE=; b=fp4jyPFlWNwpyCyg+o77XLKBo/AvaeEmA0Wxyq3TbMdOGKVbPSKOT1q0nOKpeLaPps tgPKX3g7BV5LD3+EZqQ8l9czumu4XTcv8qrZttA/NqSExymJM06+JGySv8cIn25G6RgH G35KkrN74uUMjc9TI2FLxMcZA6TrRdPSBotf+4JvZtDMkx0cH2Vg8qOdOoJQ1PimvQBr pmzC2WjNgJfEtqjZmA091QxfSLJaH8h8KIT6IypumYv5grmCdmvnmxuoJDAxJhXwz5ml hPkEx1hM5FGtU/dXipRW+IKT6d/cAYh7Iks5sjlcGw+JWofh3IG4WGt6wIQ6IKxy9L+3 epRQ== X-Gm-Message-State: AOAM530YN9b3xfiwifT61+bMC2qgona42JxtzL11zo8z7d/zZV4aVbCF KNjrhbO84bR1GyVgzDIwQySSgv87hUk= X-Google-Smtp-Source: ABdhPJyIxePGxygd36SgHLHKsG1CV7OIL0ps+CaI4w8xpkljTDxyZ3AJbqgX8s7GXr5EC8WrmChC38hBy3M= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:3486:b0:425:f2c6:9695 with SMTP id v6-20020a056402348600b00425f2c69695mr8713048edc.2.1650991491493; Tue, 26 Apr 2022 09:44:51 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:41 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-13-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: 
git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 12/46] kmsan: add KMSAN runtime core From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 9F312C0052 X-Stat-Signature: a3twkrua46k1t59ruqgnajr6yb9jt6pa Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="E6xJ/acQ"; spf=pass (imf10.hostedemail.com: domain of 3gyFoYgYKCIAkpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3gyFoYgYKCIAkpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1650991484-107829 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: For each memory location KernelMemorySanitizer maintains two types of metadata: 1. The so-called shadow of that location - а byte:byte mapping describing whether or not individual bits of memory are initialized (shadow is 0) or not (shadow is 1). 2. The origins of that location - а 4-byte:4-byte mapping containing 4-byte IDs of the stack traces where uninitialized values were created. Each struct page now contains pointers to two struct pages holding KMSAN metadata (shadow and origins) for the original struct page. Utility routines in mm/kmsan/core.c and mm/kmsan/shadow.c handle the metadata creation, addressing, copying and checking. mm/kmsan/report.c performs error reporting in the cases an uninitialized value is used in a way that leads to undefined behavior. KMSAN compiler instrumentation is responsible for tracking the metadata along with the kernel memory. mm/kmsan/instrumentation.c provides the implementation for instrumentation hooks that are called from files compiled with -fsanitize=kernel-memory. To aid parameter passing (also done at instrumentation level), each task_struct now contains a struct kmsan_task_state used to track the metadata of function parameters and return values for that task. Finally, this patch provides CONFIG_KMSAN that enables KMSAN, and declares CFLAGS_KMSAN, which are applied to files compiled with KMSAN. The KMSAN_SANITIZE:=n Makefile directive can be used to completely disable KMSAN instrumentation for certain files. Similarly, KMSAN_ENABLE_CHECKS:=n disables KMSAN checks and makes newly created stack memory initialized. Users can also use functions from include/linux/kmsan-checks.h to mark certain memory regions as uninitialized or initialized (this is called "poisoning" and "unpoisoning") or check that a particular region is initialized. 
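To illustrate the annotation API mentioned above (an editorial sketch, not taken from the patch: the helper and buffer names are invented, while the kmsan_* declarations come from the new include/linux/kmsan-checks.h):

	#include <linux/gfp.h>
	#include <linux/kmsan-checks.h>

	/*
	 * Hypothetical driver helper: the device filled @buf via DMA, which
	 * the compiler cannot see, so tell KMSAN the data is initialized.
	 * Before @reply leaves the kernel, check that it is fully
	 * initialized, and re-poison @buf when it is recycled.
	 */
	static void demo_kmsan_annotations(void *buf, size_t len,
					   const void *reply, size_t reply_len)
	{
		kmsan_unpoison_memory(buf, len);

		/* Reports an error (with origins) if any byte of @reply is
		 * still uninitialized. */
		kmsan_check_memory(reply, reply_len);

		/* Mark @buf uninitialized again before reuse. */
		kmsan_poison_memory(buf, len, GFP_KERNEL);
	}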
Signed-off-by: Alexander Potapenko --- v2: -- as requested by Greg K-H, moved hooks for different subsystems to respective patches, rewrote the patch description; -- addressed comments by Dmitry Vyukov; -- added a note about KMSAN being not intended for production use. -- fix case of unaligned dst in kmsan_internal_memmove_metadata() v3: -- print build IDs in reports where applicable -- drop redundant filter_irq_stacks(), unpoison the local passed to __stack_depot_save() -- remove a stray BUG() Link: https://linux-review.googlesource.com/id/I9b71bfe3425466c97159f9de0062e5e8e4fec866 --- Makefile | 1 + include/linux/kmsan-checks.h | 64 +++++ include/linux/kmsan.h | 47 ++++ include/linux/mm_types.h | 12 + include/linux/sched.h | 5 + lib/Kconfig.debug | 1 + lib/Kconfig.kmsan | 23 ++ mm/Makefile | 1 + mm/kmsan/Makefile | 18 ++ mm/kmsan/core.c | 458 +++++++++++++++++++++++++++++++++++ mm/kmsan/hooks.c | 66 +++++ mm/kmsan/instrumentation.c | 267 ++++++++++++++++++++ mm/kmsan/kmsan.h | 183 ++++++++++++++ mm/kmsan/report.c | 211 ++++++++++++++++ mm/kmsan/shadow.c | 186 ++++++++++++++ scripts/Makefile.kmsan | 1 + scripts/Makefile.lib | 9 + 17 files changed, 1553 insertions(+) create mode 100644 include/linux/kmsan-checks.h create mode 100644 include/linux/kmsan.h create mode 100644 lib/Kconfig.kmsan create mode 100644 mm/kmsan/Makefile create mode 100644 mm/kmsan/core.c create mode 100644 mm/kmsan/hooks.c create mode 100644 mm/kmsan/instrumentation.c create mode 100644 mm/kmsan/kmsan.h create mode 100644 mm/kmsan/report.c create mode 100644 mm/kmsan/shadow.c create mode 100644 scripts/Makefile.kmsan diff --git a/Makefile b/Makefile index c3ec1ea423797..d3c7dcd9f0fea 100644 --- a/Makefile +++ b/Makefile @@ -1009,6 +1009,7 @@ include-y := scripts/Makefile.extrawarn include-$(CONFIG_DEBUG_INFO) += scripts/Makefile.debug include-$(CONFIG_KASAN) += scripts/Makefile.kasan include-$(CONFIG_KCSAN) += scripts/Makefile.kcsan +include-$(CONFIG_KMSAN) += scripts/Makefile.kmsan include-$(CONFIG_UBSAN) += scripts/Makefile.ubsan include-$(CONFIG_KCOV) += scripts/Makefile.kcov include-$(CONFIG_GCC_PLUGINS) += scripts/Makefile.gcc-plugins diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h new file mode 100644 index 0000000000000..a6522a0c28df9 --- /dev/null +++ b/include/linux/kmsan-checks.h @@ -0,0 +1,64 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN checks to be used for one-off annotations in subsystems. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#ifndef _LINUX_KMSAN_CHECKS_H +#define _LINUX_KMSAN_CHECKS_H + +#include + +#ifdef CONFIG_KMSAN + +/** + * kmsan_poison_memory() - Mark the memory range as uninitialized. + * @address: address to start with. + * @size: size of buffer to poison. + * @flags: GFP flags for allocations done by this function. + * + * Until other data is written to this range, KMSAN will treat it as + * uninitialized. Error reports for this memory will reference the call site of + * kmsan_poison_memory() as origin. + */ +void kmsan_poison_memory(const void *address, size_t size, gfp_t flags); + +/** + * kmsan_unpoison_memory() - Mark the memory range as initialized. + * @address: address to start with. + * @size: size of buffer to unpoison. + * + * Until other data is written to this range, KMSAN will treat it as + * initialized. + */ +void kmsan_unpoison_memory(const void *address, size_t size); + +/** + * kmsan_check_memory() - Check the memory range for being initialized. + * @address: address to start with. 
+ * @size: size of buffer to check. + * + * If any piece of the given range is marked as uninitialized, KMSAN will report + * an error. + */ +void kmsan_check_memory(const void *address, size_t size); + +#else + +static inline void kmsan_poison_memory(const void *address, size_t size, + gfp_t flags) +{ +} +static inline void kmsan_unpoison_memory(const void *address, size_t size) +{ +} +static inline void kmsan_check_memory(const void *address, size_t size) +{ +} + +#endif + +#endif /* _LINUX_KMSAN_CHECKS_H */ diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h new file mode 100644 index 0000000000000..4e35f43eceaa9 --- /dev/null +++ b/include/linux/kmsan.h @@ -0,0 +1,47 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN API for subsystems. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ +#ifndef _LINUX_KMSAN_H +#define _LINUX_KMSAN_H + +#include +#include +#include +#include +#include + +struct page; + +#ifdef CONFIG_KMSAN + +/* These constants are defined in the MSan LLVM instrumentation pass. */ +#define KMSAN_RETVAL_SIZE 800 +#define KMSAN_PARAM_SIZE 800 + +struct kmsan_context_state { + char param_tls[KMSAN_PARAM_SIZE]; + char retval_tls[KMSAN_RETVAL_SIZE]; + char va_arg_tls[KMSAN_PARAM_SIZE]; + char va_arg_origin_tls[KMSAN_PARAM_SIZE]; + u64 va_arg_overflow_size_tls; + char param_origin_tls[KMSAN_PARAM_SIZE]; + depot_stack_handle_t retval_origin_tls; +}; + +#undef KMSAN_PARAM_SIZE +#undef KMSAN_RETVAL_SIZE + +struct kmsan_ctx { + struct kmsan_context_state cstate; + int kmsan_in_runtime; + bool allow_reporting; +}; + +#endif + +#endif /* _LINUX_KMSAN_H */ diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 8834e38c06a4f..85c97a2145f7e 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -218,6 +218,18 @@ struct page { not kmapped, ie. highmem) */ #endif /* WANT_PAGE_VIRTUAL */ +#ifdef CONFIG_KMSAN + /* + * KMSAN metadata for this page: + * - shadow page: every bit indicates whether the corresponding + * bit of the original page is initialized (0) or not (1); + * - origin page: every 4 bytes contain an id of the stack trace + * where the uninitialized value was created. 
+ */ + struct page *kmsan_shadow; + struct page *kmsan_origin; +#endif + #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS int _last_cpupid; #endif diff --git a/include/linux/sched.h b/include/linux/sched.h index a8911b1f35aad..9e53624cd73ac 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -14,6 +14,7 @@ #include #include #include +#include #include #include #include @@ -1352,6 +1353,10 @@ struct task_struct { #endif #endif +#ifdef CONFIG_KMSAN + struct kmsan_ctx kmsan_ctx; +#endif + #if IS_ENABLED(CONFIG_KUNIT) struct kunit *kunit_test; #endif diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 075cd25363ac3..b81670878acae 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -996,6 +996,7 @@ config DEBUG_STACKOVERFLOW source "lib/Kconfig.kasan" source "lib/Kconfig.kfence" +source "lib/Kconfig.kmsan" endmenu # "Memory Debugging" diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan new file mode 100644 index 0000000000000..199f79d031f94 --- /dev/null +++ b/lib/Kconfig.kmsan @@ -0,0 +1,23 @@ +config HAVE_ARCH_KMSAN + bool + +config HAVE_KMSAN_COMPILER + def_bool (CC_IS_CLANG && $(cc-option,-fsanitize=kernel-memory -mllvm -msan-disable-checks=1)) + +config KMSAN + bool "KMSAN: detector of uninitialized values use" + depends on HAVE_ARCH_KMSAN && HAVE_KMSAN_COMPILER + depends on SLUB && DEBUG_KERNEL && !KASAN && !KCSAN + depends on CC_IS_CLANG && CLANG_VERSION >= 140000 + select STACKDEPOT + select STACKDEPOT_ALWAYS_INIT + help + KernelMemorySanitizer (KMSAN) is a dynamic detector of uses of + uninitialized values in the kernel. It is based on compiler + instrumentation provided by Clang and thus requires Clang to build. + + An important note is that KMSAN is not intended for production use, + because it drastically increases kernel memory footprint and slows + the whole system down. + + See for more details. diff --git a/mm/Makefile b/mm/Makefile index 4cc13f3179a51..4da7eeaecc214 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -89,6 +89,7 @@ obj-$(CONFIG_SLAB) += slab.o obj-$(CONFIG_SLUB) += slub.o obj-$(CONFIG_KASAN) += kasan/ obj-$(CONFIG_KFENCE) += kfence/ +obj-$(CONFIG_KMSAN) += kmsan/ obj-$(CONFIG_FAILSLAB) += failslab.o obj-$(CONFIG_MEMTEST) += memtest.o obj-$(CONFIG_MIGRATION) += migrate.o diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile new file mode 100644 index 0000000000000..a80dde1de7048 --- /dev/null +++ b/mm/kmsan/Makefile @@ -0,0 +1,18 @@ +obj-y := core.o instrumentation.o hooks.o report.o shadow.o + +KMSAN_SANITIZE := n +KCOV_INSTRUMENT := n +UBSAN_SANITIZE := n + +# Disable instrumentation of KMSAN runtime with other tools. +CC_FLAGS_KMSAN_RUNTIME := -fno-stack-protector +CC_FLAGS_KMSAN_RUNTIME += $(call cc-option,-fno-conserve-stack) +CC_FLAGS_KMSAN_RUNTIME += -DDISABLE_BRANCH_PROFILING + +CFLAGS_REMOVE.o = $(CC_FLAGS_FTRACE) + +CFLAGS_core.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_hooks.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME) diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c new file mode 100644 index 0000000000000..933d864d9d467 --- /dev/null +++ b/mm/kmsan/core.c @@ -0,0 +1,458 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN runtime library. 
+ * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../slab.h" +#include "kmsan.h" + +/* + * Avoid creating too long origin chains, these are unlikely to participate in + * real reports. + */ +#define MAX_CHAIN_DEPTH 7 +#define NUM_SKIPPED_TO_WARN 10000 + +bool kmsan_enabled __read_mostly; + +/* + * Per-CPU KMSAN context to be used in interrupts, where current->kmsan is + * unavaliable. + */ +DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); + +void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, + unsigned int poison_flags) +{ + u32 extra_bits = + kmsan_extra_bits(/*depth*/ 0, poison_flags & KMSAN_POISON_FREE); + bool checked = poison_flags & KMSAN_POISON_CHECK; + depot_stack_handle_t handle; + + handle = kmsan_save_stack_with_flags(flags, extra_bits); + kmsan_internal_set_shadow_origin(address, size, -1, handle, checked); +} + +void kmsan_internal_unpoison_memory(void *address, size_t size, bool checked) +{ + kmsan_internal_set_shadow_origin(address, size, 0, 0, checked); +} + +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int extra) +{ + unsigned long entries[KMSAN_STACK_DEPTH]; + unsigned int nr_entries; + + nr_entries = stack_trace_save(entries, KMSAN_STACK_DEPTH, 0); + + /* Don't sleep (see might_sleep_if() in __alloc_pages_nodemask()). */ + flags &= ~__GFP_DIRECT_RECLAIM; + + return __stack_depot_save(entries, nr_entries, extra, flags, true); +} + +/* Copy the metadata following the memmove() behavior. */ +void kmsan_internal_memmove_metadata(void *dst, void *src, size_t n) +{ + depot_stack_handle_t old_origin = 0, new_origin = 0; + int src_slots, dst_slots, i, iter, step, skip_bits; + depot_stack_handle_t *origin_src, *origin_dst; + void *shadow_src, *shadow_dst; + u32 *align_shadow_src, shadow; + bool backwards; + + shadow_dst = kmsan_get_metadata(dst, KMSAN_META_SHADOW); + if (!shadow_dst) + return; + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(dst, n)); + + shadow_src = kmsan_get_metadata(src, KMSAN_META_SHADOW); + if (!shadow_src) { + /* + * |src| is untracked: zero out destination shadow, ignore the + * origins, we're done. + */ + __memset(shadow_dst, 0, n); + return; + } + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(src, n)); + + __memmove(shadow_dst, shadow_src, n); + + origin_dst = kmsan_get_metadata(dst, KMSAN_META_ORIGIN); + origin_src = kmsan_get_metadata(src, KMSAN_META_ORIGIN); + KMSAN_WARN_ON(!origin_dst || !origin_src); + src_slots = (ALIGN((u64)src + n, KMSAN_ORIGIN_SIZE) - + ALIGN_DOWN((u64)src, KMSAN_ORIGIN_SIZE)) / + KMSAN_ORIGIN_SIZE; + dst_slots = (ALIGN((u64)dst + n, KMSAN_ORIGIN_SIZE) - + ALIGN_DOWN((u64)dst, KMSAN_ORIGIN_SIZE)) / + KMSAN_ORIGIN_SIZE; + KMSAN_WARN_ON((src_slots < 1) || (dst_slots < 1)); + KMSAN_WARN_ON((src_slots - dst_slots > 1) || + (dst_slots - src_slots < -1)); + + backwards = dst > src; + i = backwards ? min(src_slots, dst_slots) - 1 : 0; + iter = backwards ? -1 : 1; + + align_shadow_src = + (u32 *)ALIGN_DOWN((u64)shadow_src, KMSAN_ORIGIN_SIZE); + for (step = 0; step < min(src_slots, dst_slots); step++, i += iter) { + KMSAN_WARN_ON(i < 0); + shadow = align_shadow_src[i]; + if (i == 0) { + /* + * If |src| isn't aligned on KMSAN_ORIGIN_SIZE, don't + * look at the first |src % KMSAN_ORIGIN_SIZE| bytes + * of the first shadow slot. 
+ */ + skip_bits = ((u64)src % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow >> skip_bits) << skip_bits; + } + if (i == src_slots - 1) { + /* + * If |src + n| isn't aligned on + * KMSAN_ORIGIN_SIZE, don't look at the last + * |(src + n) % KMSAN_ORIGIN_SIZE| bytes of the + * last shadow slot. + */ + skip_bits = (((u64)src + n) % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow << skip_bits) >> skip_bits; + } + /* + * Overwrite the origin only if the corresponding + * shadow is nonempty. + */ + if (origin_src[i] && (origin_src[i] != old_origin) && shadow) { + old_origin = origin_src[i]; + new_origin = kmsan_internal_chain_origin(old_origin); + /* + * kmsan_internal_chain_origin() may return + * NULL, but we don't want to lose the previous + * origin value. + */ + if (!new_origin) + new_origin = old_origin; + } + if (shadow) + origin_dst[i] = new_origin; + else + origin_dst[i] = 0; + } + /* + * If dst_slots is greater than src_slots (i.e. + * dst_slots == src_slots + 1), there is an extra origin slot at the + * beginning or end of the destination buffer, for which we take the + * origin from the previous slot. + * This is only done if the part of the source shadow corresponding to + * slot is non-zero. + * + * E.g. if we copy 8 aligned bytes that are marked as uninitialized + * and have origins o111 and o222, to an unaligned buffer with offset 1, + * these two origins are copied to three origin slots, so one of then + * needs to be duplicated, depending on the copy direction (@backwards) + * + * src shadow: |uuuu|uuuu|....| + * src origin: |o111|o222|....| + * + * backwards = 0: + * dst shadow: |.uuu|uuuu|u...| + * dst origin: |....|o111|o222| - fill the empty slot with o111 + * backwards = 1: + * dst shadow: |.uuu|uuuu|u...| + * dst origin: |o111|o222|....| - fill the empty slot with o222 + */ + if (src_slots < dst_slots) { + if (backwards) { + shadow = align_shadow_src[src_slots - 1]; + skip_bits = (((u64)dst + n) % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow << skip_bits) >> skip_bits; + if (shadow) + /* src_slots > 0, therefore dst_slots is at least 2 */ + origin_dst[dst_slots - 1] = origin_dst[dst_slots - 2]; + } else { + shadow = align_shadow_src[0]; + skip_bits = ((u64)dst % KMSAN_ORIGIN_SIZE) * 8; + shadow = (shadow >> skip_bits) << skip_bits; + if (shadow) + origin_dst[0] = origin_dst[1]; + } + } +} + +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id) +{ + unsigned long entries[3]; + u32 extra_bits; + int depth; + bool uaf; + + if (!id) + return id; + /* + * Make sure we have enough spare bits in |id| to hold the UAF bit and + * the chain depth. + */ + BUILD_BUG_ON((1 << STACK_DEPOT_EXTRA_BITS) <= (MAX_CHAIN_DEPTH << 1)); + + extra_bits = stack_depot_get_extra_bits(id); + depth = kmsan_depth_from_eb(extra_bits); + uaf = kmsan_uaf_from_eb(extra_bits); + + if (depth >= MAX_CHAIN_DEPTH) { + static atomic_long_t kmsan_skipped_origins; + long skipped = atomic_long_inc_return(&kmsan_skipped_origins); + + if (skipped % NUM_SKIPPED_TO_WARN == 0) { + pr_warn("not chained %ld origins\n", skipped); + dump_stack(); + kmsan_print_origin(id); + } + return id; + } + depth++; + extra_bits = kmsan_extra_bits(depth, uaf); + + entries[0] = KMSAN_CHAIN_MAGIC_ORIGIN; + entries[1] = kmsan_save_stack_with_flags(GFP_ATOMIC, 0); + entries[2] = id; + /* + * @entries is a local var in non-instrumented code, so KMSAN does not + * know it is initialized. Explicitly unpoison it to avoid false + * positives when __stack_depot_save() passes it to instrumented code. 
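 *
 * The record saved below is the three-word tuple
 *     { KMSAN_CHAIN_MAGIC_ORIGIN, <stack id of this store>, <previous origin> },
 * which kmsan_print_origin() later recognizes by its magic value and unrolls
 * into a chain of "Uninit was stored to memory at:" reports.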
+ */ + kmsan_internal_unpoison_memory(entries, sizeof(entries), false); + return __stack_depot_save(entries, ARRAY_SIZE(entries), extra_bits, + GFP_ATOMIC, true); +} + +void kmsan_internal_set_shadow_origin(void *addr, size_t size, int b, + u32 origin, bool checked) +{ + u64 address = (u64)addr; + void *shadow_start; + u32 *origin_start; + size_t pad = 0; + int i; + + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); + shadow_start = kmsan_get_metadata(addr, KMSAN_META_SHADOW); + if (!shadow_start) { + /* + * kmsan_metadata_is_contiguous() is true, so either all shadow + * and origin pages are NULL, or all are non-NULL. + */ + if (checked) { + pr_err("%s: not memsetting %ld bytes starting at %px, because the shadow is NULL\n", + __func__, size, addr); + KMSAN_WARN_ON(true); + } + return; + } + __memset(shadow_start, b, size); + + if (!IS_ALIGNED(address, KMSAN_ORIGIN_SIZE)) { + pad = address % KMSAN_ORIGIN_SIZE; + address -= pad; + size += pad; + } + size = ALIGN(size, KMSAN_ORIGIN_SIZE); + origin_start = + (u32 *)kmsan_get_metadata((void *)address, KMSAN_META_ORIGIN); + + for (i = 0; i < size / KMSAN_ORIGIN_SIZE; i++) + origin_start[i] = origin; +} + +struct page *kmsan_vmalloc_to_page_or_null(void *vaddr) +{ + struct page *page; + + if (!kmsan_internal_is_vmalloc_addr(vaddr) && + !kmsan_internal_is_module_addr(vaddr)) + return NULL; + page = vmalloc_to_page(vaddr); + if (pfn_valid(page_to_pfn(page))) + return page; + else + return NULL; +} + +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason) +{ + depot_stack_handle_t cur_origin = 0, new_origin = 0; + unsigned long addr64 = (unsigned long)addr; + depot_stack_handle_t *origin = NULL; + unsigned char *shadow = NULL; + int cur_off_start = -1; + int i, chunk_size; + size_t pos = 0; + + if (!size) + return; + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(addr, size)); + while (pos < size) { + chunk_size = min(size - pos, + PAGE_SIZE - ((addr64 + pos) % PAGE_SIZE)); + shadow = kmsan_get_metadata((void *)(addr64 + pos), + KMSAN_META_SHADOW); + if (!shadow) { + /* + * This page is untracked. If there were uninitialized + * bytes before, report them. + */ + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos - 1, user_addr, + reason); + kmsan_leave_runtime(); + } + cur_origin = 0; + cur_off_start = -1; + pos += chunk_size; + continue; + } + for (i = 0; i < chunk_size; i++) { + if (!shadow[i]) { + /* + * This byte is unpoisoned. If there were + * poisoned bytes before, report them. + */ + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(); + } + cur_origin = 0; + cur_off_start = -1; + continue; + } + origin = kmsan_get_metadata((void *)(addr64 + pos + i), + KMSAN_META_ORIGIN); + KMSAN_WARN_ON(!origin); + new_origin = *origin; + /* + * Encountered new origin - report the previous + * uninitialized range. 
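 * E.g. for a 16-byte range whose bytes 0-3 are uninitialized with origin A,
 * bytes 4-7 are initialized, and bytes 8-15 are uninitialized with origin B,
 * this loop produces two reports: one for bytes 0-3 (origin A) and one for
 * bytes 8-15 (origin B).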
+ */ + if (cur_origin != new_origin) { + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(); + } + cur_origin = new_origin; + cur_off_start = pos + i; + } + } + pos += chunk_size; + } + KMSAN_WARN_ON(pos != size); + if (cur_origin) { + kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, cur_off_start, pos - 1, + user_addr, reason); + kmsan_leave_runtime(); + } +} + +bool kmsan_metadata_is_contiguous(void *addr, size_t size) +{ + char *cur_shadow = NULL, *next_shadow = NULL, *cur_origin = NULL, + *next_origin = NULL; + u64 cur_addr = (u64)addr, next_addr = cur_addr + PAGE_SIZE; + depot_stack_handle_t *origin_p; + bool all_untracked = false; + + if (!size) + return true; + + /* The whole range belongs to the same page. */ + if (ALIGN_DOWN(cur_addr + size - 1, PAGE_SIZE) == + ALIGN_DOWN(cur_addr, PAGE_SIZE)) + return true; + + cur_shadow = kmsan_get_metadata((void *)cur_addr, /*is_origin*/ false); + if (!cur_shadow) + all_untracked = true; + cur_origin = kmsan_get_metadata((void *)cur_addr, /*is_origin*/ true); + if (all_untracked && cur_origin) + goto report; + + for (; next_addr < (u64)addr + size; + cur_addr = next_addr, cur_shadow = next_shadow, + cur_origin = next_origin, next_addr += PAGE_SIZE) { + next_shadow = kmsan_get_metadata((void *)next_addr, false); + next_origin = kmsan_get_metadata((void *)next_addr, true); + if (all_untracked) { + if (next_shadow || next_origin) + goto report; + if (!next_shadow && !next_origin) + continue; + } + if (((u64)cur_shadow == ((u64)next_shadow - PAGE_SIZE)) && + ((u64)cur_origin == ((u64)next_origin - PAGE_SIZE))) + continue; + goto report; + } + return true; + +report: + pr_err("%s: attempting to access two shadow page ranges.\n", __func__); + pr_err("Access of size %ld at %px.\n", size, addr); + pr_err("Addresses belonging to different ranges: %px and %px\n", + (void *)cur_addr, (void *)next_addr); + pr_err("page[0].shadow: %px, page[1].shadow: %px\n", cur_shadow, + next_shadow); + pr_err("page[0].origin: %px, page[1].origin: %px\n", cur_origin, + next_origin); + origin_p = kmsan_get_metadata(addr, KMSAN_META_ORIGIN); + if (origin_p) { + pr_err("Origin: %08x\n", *origin_p); + kmsan_print_origin(*origin_p); + } else { + pr_err("Origin: unavailable\n"); + } + return false; +} + +bool kmsan_internal_is_module_addr(void *vaddr) +{ + return ((u64)vaddr >= MODULES_VADDR) && ((u64)vaddr < MODULES_END); +} + +bool kmsan_internal_is_vmalloc_addr(void *addr) +{ + return ((u64)addr >= VMALLOC_START) && ((u64)addr < VMALLOC_END); +} diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c new file mode 100644 index 0000000000000..4ac62fa67a02a --- /dev/null +++ b/mm/kmsan/hooks.c @@ -0,0 +1,66 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN hooks for kernel subsystems. + * + * These functions handle creation of KMSAN metadata for memory allocations. + * + * Copyright (C) 2018-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include + +#include "../internal.h" +#include "../slab.h" +#include "kmsan.h" + +/* + * Instrumented functions shouldn't be called under + * kmsan_enter_runtime()/kmsan_leave_runtime(), because this will lead to + * skipping effects of functions like memset() inside instrumented code. + */ + +/* Functions from kmsan-checks.h follow. 
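 *
 * A minimal usage sketch (the buffer and sizes are only illustrative):
 *
 *     char buf[64];
 *
 *     kmsan_unpoison_memory(buf, sizeof(buf));    // mark as initialized
 *     kmsan_check_memory(buf, sizeof(buf));       // no report expected here
 *     kmsan_poison_memory(buf, sizeof(buf), GFP_KERNEL); // uninitialized again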
*/ +void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. */ + kmsan_internal_poison_memory((void *)address, size, flags, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_poison_memory); + +void kmsan_unpoison_memory(const void *address, size_t size) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. */ + kmsan_internal_unpoison_memory((void *)address, size, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(kmsan_unpoison_memory); + +void kmsan_check_memory(const void *addr, size_t size) +{ + if (!kmsan_enabled) + return; + return kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); +} +EXPORT_SYMBOL(kmsan_check_memory); diff --git a/mm/kmsan/instrumentation.c b/mm/kmsan/instrumentation.c new file mode 100644 index 0000000000000..fe062d123a76f --- /dev/null +++ b/mm/kmsan/instrumentation.c @@ -0,0 +1,267 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN compiler API. + * + * This file implements __msan_XXX hooks that Clang inserts into the code + * compiled with -fsanitize=kernel-memory. + * See Documentation/dev-tools/kmsan.rst for more information on how KMSAN + * instrumentation works. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include "kmsan.h" +#include +#include +#include + +static inline bool is_bad_asm_addr(void *addr, uintptr_t size, bool is_store) +{ + if ((u64)addr < TASK_SIZE) + return true; + if (!kmsan_get_metadata(addr, KMSAN_META_SHADOW)) + return true; + return false; +} + +static inline struct shadow_origin_ptr +get_shadow_origin_ptr(void *addr, u64 size, bool store) +{ + unsigned long ua_flags = user_access_save(); + struct shadow_origin_ptr ret; + + ret = kmsan_get_shadow_origin_ptr(addr, size, store); + user_access_restore(ua_flags); + return ret; +} + +/* Get shadow and origin pointers for a memory load with non-standard size. */ +struct shadow_origin_ptr __msan_metadata_ptr_for_load_n(void *addr, + uintptr_t size) +{ + return get_shadow_origin_ptr(addr, size, /*store*/ false); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_load_n); + +/* Get shadow and origin pointers for a memory store with non-standard size. */ +struct shadow_origin_ptr __msan_metadata_ptr_for_store_n(void *addr, + uintptr_t size) +{ + return get_shadow_origin_ptr(addr, size, /*store*/ true); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_store_n); + +/* + * Declare functions that obtain shadow/origin pointers for loads and stores + * with fixed size. + */ +#define DECLARE_METADATA_PTR_GETTER(size) \ + struct shadow_origin_ptr __msan_metadata_ptr_for_load_##size( \ + void *addr) \ + { \ + return get_shadow_origin_ptr(addr, size, /*store*/ false); \ + } \ + EXPORT_SYMBOL(__msan_metadata_ptr_for_load_##size); \ + struct shadow_origin_ptr __msan_metadata_ptr_for_store_##size( \ + void *addr) \ + { \ + return get_shadow_origin_ptr(addr, size, /*store*/ true); \ + } \ + EXPORT_SYMBOL(__msan_metadata_ptr_for_store_##size) + +DECLARE_METADATA_PTR_GETTER(1); +DECLARE_METADATA_PTR_GETTER(2); +DECLARE_METADATA_PTR_GETTER(4); +DECLARE_METADATA_PTR_GETTER(8); + +/* + * Handle a memory store performed by inline assembly. 
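 * (For ordinary compiler-generated stores Clang instead emits calls to the
 *  __msan_metadata_ptr_for_store_N() helpers above and copies the value's
 *  metadata itself; very roughly, a store of an 8-byte value v to p becomes:
 *
 *     struct shadow_origin_ptr m = __msan_metadata_ptr_for_store_8(p);
 *     __memcpy(m.shadow, &shadow_of_v, 8);   // shadow_of_v: compiler-tracked
 *     if (shadow_of_v)                       // value possibly uninitialized
 *             *(u32 *)m.origin = __msan_chain_origin(origin_of_v);
 *     *(u64 *)p = v;
 *
 *  asm() outputs cannot be instrumented this way, hence this hook.)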
KMSAN conservatively + * attempts to unpoison the outputs of asm() directives to prevent false + * positives caused by missed stores. + */ +void __msan_instrument_asm_store(void *addr, uintptr_t size) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + /* + * Most of the accesses are below 32 bytes. The two exceptions so far + * are clwb() (64 bytes) and FPU state (512 bytes). + * It's unlikely that the assembly will touch more than 512 bytes. + */ + if (size > 512) { + WARN_ONCE(1, "assembly store size too big: %ld\n", size); + size = 8; + } + if (is_bad_asm_addr(addr, size, /*is_store*/ true)) { + user_access_restore(ua_flags); + return; + } + kmsan_enter_runtime(); + /* Unpoisoning the memory on best effort. */ + kmsan_internal_unpoison_memory(addr, size, /*checked*/ false); + kmsan_leave_runtime(); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(__msan_instrument_asm_store); + +/* Handle llvm.memmove intrinsic. */ +void *__msan_memmove(void *dst, const void *src, uintptr_t n) +{ + void *result; + + result = __memmove(dst, src, n); + if (!n) + /* Some people call memmove() with zero length. */ + return result; + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + kmsan_internal_memmove_metadata(dst, (void *)src, n); + + return result; +} +EXPORT_SYMBOL(__msan_memmove); + +/* Handle llvm.memcpy intrinsic. */ +void *__msan_memcpy(void *dst, const void *src, uintptr_t n) +{ + void *result; + + result = __memcpy(dst, src, n); + if (!n) + /* Some people call memcpy() with zero length. */ + return result; + + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + /* Using memmove instead of memcpy doesn't affect correctness. */ + kmsan_internal_memmove_metadata(dst, (void *)src, n); + + return result; +} +EXPORT_SYMBOL(__msan_memcpy); + +/* Handle llvm.memset intrinsic. */ +void *__msan_memset(void *dst, int c, uintptr_t n) +{ + void *result; + + result = __memset(dst, c, n); + if (!kmsan_enabled || kmsan_in_runtime()) + return result; + + kmsan_enter_runtime(); + /* + * Clang doesn't pass parameter metadata here, so it is impossible to + * use shadow of @c to set up the shadow for @dst. + */ + kmsan_internal_unpoison_memory(dst, n, /*checked*/ false); + kmsan_leave_runtime(); + + return result; +} +EXPORT_SYMBOL(__msan_memset); + +/* + * Create a new origin from an old one. This is done when storing an + * uninitialized value to memory. When reporting an error, KMSAN unrolls and + * prints the whole chain of stores that preceded the use of this value. + */ +depot_stack_handle_t __msan_chain_origin(depot_stack_handle_t origin) +{ + depot_stack_handle_t ret = 0; + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return ret; + + ua_flags = user_access_save(); + + /* Creating new origins may allocate memory. */ + kmsan_enter_runtime(); + ret = kmsan_internal_chain_origin(origin); + kmsan_leave_runtime(); + user_access_restore(ua_flags); + return ret; +} +EXPORT_SYMBOL(__msan_chain_origin); + +/* Poison a local variable when entering a function. 
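 *
 * The origin recorded here is the four-word tuple
 *     { KMSAN_ALLOCA_MAGIC_ORIGIN, descr, __builtin_return_address(0),
 *       __builtin_return_address(1) },
 * which kmsan_print_origin() recognizes by its magic value and prints as
 * "Local variable <name> created at:" followed by the two return addresses.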
*/ +void __msan_poison_alloca(void *address, uintptr_t size, char *descr) +{ + depot_stack_handle_t handle; + unsigned long entries[4]; + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ua_flags = user_access_save(); + entries[0] = KMSAN_ALLOCA_MAGIC_ORIGIN; + entries[1] = (u64)descr; + entries[2] = (u64)__builtin_return_address(0); + /* + * With frame pointers enabled, it is possible to quickly fetch the + * second frame of the caller stack without calling the unwinder. + * Without them, simply do not bother. + */ + if (IS_ENABLED(CONFIG_UNWINDER_FRAME_POINTER)) + entries[3] = (u64)__builtin_return_address(1); + else + entries[3] = 0; + + /* stack_depot_save() may allocate memory. */ + kmsan_enter_runtime(); + handle = stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC); + kmsan_leave_runtime(); + + kmsan_internal_set_shadow_origin(address, size, -1, handle, + /*checked*/ true); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(__msan_poison_alloca); + +/* Unpoison a local variable. */ +void __msan_unpoison_alloca(void *address, uintptr_t size) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + kmsan_enter_runtime(); + kmsan_internal_unpoison_memory(address, size, /*checked*/ true); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(__msan_unpoison_alloca); + +/* + * Report that an uninitialized value with the given origin was used in a way + * that constituted undefined behavior. + */ +void __msan_warning(u32 origin) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + kmsan_report(origin, /*address*/ 0, /*size*/ 0, + /*off_first*/ 0, /*off_last*/ 0, /*user_addr*/ 0, + REASON_ANY); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(__msan_warning); + +/* + * At the beginning of an instrumented function, obtain the pointer to + * `struct kmsan_context_state` holding the metadata for function parameters. + */ +struct kmsan_context_state *__msan_get_context_state(void) +{ + return &kmsan_get_context()->cstate; +} +EXPORT_SYMBOL(__msan_get_context_state); diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h new file mode 100644 index 0000000000000..bfe38789950a6 --- /dev/null +++ b/mm/kmsan/kmsan.h @@ -0,0 +1,183 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Functions used by the KMSAN runtime. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#ifndef __MM_KMSAN_KMSAN_H +#define __MM_KMSAN_KMSAN_H + +#include +#include +#include +#include +#include +#include +#include +#include + +#define KMSAN_ALLOCA_MAGIC_ORIGIN 0xabcd0100 +#define KMSAN_CHAIN_MAGIC_ORIGIN 0xabcd0200 + +#define KMSAN_POISON_NOCHECK 0x0 +#define KMSAN_POISON_CHECK 0x1 +#define KMSAN_POISON_FREE 0x2 + +#define KMSAN_ORIGIN_SIZE 4 + +#define KMSAN_STACK_DEPTH 64 + +#define KMSAN_META_SHADOW (false) +#define KMSAN_META_ORIGIN (true) + +extern bool kmsan_enabled; +extern int panic_on_kmsan; + +/* + * KMSAN performs a lot of consistency checks that are currently enabled by + * default. BUG_ON is normally discouraged in the kernel, unless used for + * debugging, but KMSAN itself is a debugging tool, so it makes little sense to + * recover if something goes wrong. 
+ */ +#define KMSAN_WARN_ON(cond) \ + ({ \ + const bool __cond = WARN_ON(cond); \ + if (unlikely(__cond)) { \ + WRITE_ONCE(kmsan_enabled, false); \ + if (panic_on_kmsan) { \ + /* Can't call panic() here because */ \ + /* of uaccess checks.*/ \ + BUG(); \ + } \ + } \ + __cond; \ + }) + +/* + * A pair of metadata pointers to be returned by the instrumentation functions. + */ +struct shadow_origin_ptr { + void *shadow, *origin; +}; + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size, + bool store); +void *kmsan_get_metadata(void *addr, bool is_origin); + +enum kmsan_bug_reason { + REASON_ANY, + REASON_COPY_TO_USER, + REASON_SUBMIT_URB, +}; + +void kmsan_print_origin(depot_stack_handle_t origin); + +/** + * kmsan_report() - Report a use of uninitialized value. + * @origin: Stack ID of the uninitialized value. + * @address: Address at which the memory access happens. + * @size: Memory access size. + * @off_first: Offset (from @address) of the first byte to be reported. + * @off_last: Offset (from @address) of the last byte to be reported. + * @user_addr: When non-NULL, denotes the userspace address to which the kernel + * is leaking data. + * @reason: Error type from enum kmsan_bug_reason. + * + * kmsan_report() prints an error message for a consequent group of bytes + * sharing the same origin. If an uninitialized value is used in a comparison, + * this function is called once without specifying the addresses. When checking + * a memory range, KMSAN may call kmsan_report() multiple times with the same + * @address, @size, @user_addr and @reason, but different @off_first and + * @off_last corresponding to different @origin values. + */ +void kmsan_report(depot_stack_handle_t origin, void *address, int size, + int off_first, int off_last, const void *user_addr, + enum kmsan_bug_reason reason); + +DECLARE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); + +static __always_inline struct kmsan_ctx *kmsan_get_context(void) +{ + return in_task() ? ¤t->kmsan_ctx : raw_cpu_ptr(&kmsan_percpu_ctx); +} + +/* + * When a compiler hook is invoked, it may make a call to instrumented code + * and eventually call itself recursively. To avoid that, we protect the + * runtime entry points with kmsan_enter_runtime()/kmsan_leave_runtime() and + * exit the hook if kmsan_in_runtime() is true. + */ + +static __always_inline bool kmsan_in_runtime(void) +{ + if ((hardirq_count() >> HARDIRQ_SHIFT) > 1) + return true; + return kmsan_get_context()->kmsan_in_runtime; +} + +static __always_inline void kmsan_enter_runtime(void) +{ + struct kmsan_ctx *ctx; + + ctx = kmsan_get_context(); + KMSAN_WARN_ON(ctx->kmsan_in_runtime++); +} + +static __always_inline void kmsan_leave_runtime(void) +{ + struct kmsan_ctx *ctx = kmsan_get_context(); + + KMSAN_WARN_ON(--ctx->kmsan_in_runtime); +} + +depot_stack_handle_t kmsan_save_stack(void); +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int extra_bits); + +/* + * Pack and unpack the origin chain depth and UAF flag to/from the extra bits + * provided by the stack depot. + * The UAF flag is stored in the lowest bit, followed by the depth in the upper + * bits. + * set_dsh_extra_bits() is responsible for clamping the value. 
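 *
 * For example, depth == 3 with the UAF flag set packs to (3 << 1) | 1 == 7;
 * kmsan_depth_from_eb(7) yields 3 and kmsan_uaf_from_eb(7) yields true.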
+ */ +static __always_inline unsigned int kmsan_extra_bits(unsigned int depth, + bool uaf) +{ + return (depth << 1) | uaf; +} + +static __always_inline bool kmsan_uaf_from_eb(unsigned int extra_bits) +{ + return extra_bits & 1; +} + +static __always_inline unsigned int kmsan_depth_from_eb(unsigned int extra_bits) +{ + return extra_bits >> 1; +} + +/* + * kmsan_internal_ functions are supposed to be very simple and not require the + * kmsan_in_runtime() checks. + */ +void kmsan_internal_memmove_metadata(void *dst, void *src, size_t n); +void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, + unsigned int poison_flags); +void kmsan_internal_unpoison_memory(void *address, size_t size, bool checked); +void kmsan_internal_set_shadow_origin(void *address, size_t size, int b, + u32 origin, bool checked); +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id); + +bool kmsan_metadata_is_contiguous(void *addr, size_t size); +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason); +bool kmsan_internal_is_module_addr(void *vaddr); +bool kmsan_internal_is_vmalloc_addr(void *addr); + +struct page *kmsan_vmalloc_to_page_or_null(void *vaddr); + +#endif /* __MM_KMSAN_KMSAN_H */ diff --git a/mm/kmsan/report.c b/mm/kmsan/report.c new file mode 100644 index 0000000000000..f36fca452e313 --- /dev/null +++ b/mm/kmsan/report.c @@ -0,0 +1,211 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN error reporting routines. + * + * Copyright (C) 2019-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include + +#include "kmsan.h" + +static DEFINE_SPINLOCK(kmsan_report_lock); +#define DESCR_SIZE 128 +/* Protected by kmsan_report_lock */ +static char report_local_descr[DESCR_SIZE]; +int panic_on_kmsan __read_mostly; + +#ifdef MODULE_PARAM_PREFIX +#undef MODULE_PARAM_PREFIX +#endif +#define MODULE_PARAM_PREFIX "kmsan." +module_param_named(panic, panic_on_kmsan, int, 0); + +/* + * Skip internal KMSAN frames. + */ +static int get_stack_skipnr(const unsigned long stack_entries[], + int num_entries) +{ + int len, skip; + char buf[64]; + + for (skip = 0; skip < num_entries; ++skip) { + len = scnprintf(buf, sizeof(buf), "%ps", + (void *)stack_entries[skip]); + + /* Never show __msan_* or kmsan_* functions. */ + if ((strnstr(buf, "__msan_", len) == buf) || + (strnstr(buf, "kmsan_", len) == buf)) + continue; + + /* + * No match for runtime functions -- @skip entries to skip to + * get to first frame of interest. + */ + break; + } + + return skip; +} + +/* + * Currently the descriptions of locals generated by Clang look as follows: + * ----local_name@function_name + * We want to print only the name of the local, as other information in that + * description can be confusing. + * The meaningful part of the description is copied to a global buffer to avoid + * allocating memory. 
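 *
 * E.g. a description such as "----retval@do_sys_open" is printed as just
 * "retval": the '-' characters are skipped and copying stops at the '@'
 * separator (the name and function here are only an example).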
+ */ +static char *pretty_descr(char *descr) +{ + int i, pos = 0, len = strlen(descr); + + for (i = 0; i < len; i++) { + if (descr[i] == '@') + break; + if (descr[i] == '-') + continue; + report_local_descr[pos] = descr[i]; + if (pos + 1 == DESCR_SIZE) + break; + pos++; + } + report_local_descr[pos] = 0; + return report_local_descr; +} + +void kmsan_print_origin(depot_stack_handle_t origin) +{ + unsigned long *entries = NULL, *chained_entries = NULL; + unsigned int nr_entries, chained_nr_entries, skipnr; + void *pc1 = NULL, *pc2 = NULL; + depot_stack_handle_t head; + unsigned long magic; + char *descr = NULL; + + if (!origin) + return; + + while (true) { + nr_entries = stack_depot_fetch(origin, &entries); + magic = nr_entries ? entries[0] : 0; + if ((nr_entries == 4) && (magic == KMSAN_ALLOCA_MAGIC_ORIGIN)) { + descr = (char *)entries[1]; + pc1 = (void *)entries[2]; + pc2 = (void *)entries[3]; + pr_err("Local variable %s created at:\n", + pretty_descr(descr)); + if (pc1) + pr_err(" %pSb\n", pc1); + if (pc2) + pr_err(" %pSb\n", pc2); + break; + } + if ((nr_entries == 3) && (magic == KMSAN_CHAIN_MAGIC_ORIGIN)) { + head = entries[1]; + origin = entries[2]; + pr_err("Uninit was stored to memory at:\n"); + chained_nr_entries = + stack_depot_fetch(head, &chained_entries); + kmsan_internal_unpoison_memory( + chained_entries, + chained_nr_entries * sizeof(*chained_entries), + /*checked*/ false); + skipnr = get_stack_skipnr(chained_entries, + chained_nr_entries); + stack_trace_print(chained_entries + skipnr, + chained_nr_entries - skipnr, 0); + pr_err("\n"); + continue; + } + pr_err("Uninit was created at:\n"); + if (nr_entries) { + skipnr = get_stack_skipnr(entries, nr_entries); + stack_trace_print(entries + skipnr, nr_entries - skipnr, + 0); + } else { + pr_err("(stack is not available)\n"); + } + break; + } +} + +void kmsan_report(depot_stack_handle_t origin, void *address, int size, + int off_first, int off_last, const void *user_addr, + enum kmsan_bug_reason reason) +{ + unsigned long stack_entries[KMSAN_STACK_DEPTH]; + int num_stack_entries, skipnr; + char *bug_type = NULL; + unsigned long flags, ua_flags; + bool is_uaf; + + if (!kmsan_enabled) + return; + if (!current->kmsan_ctx.allow_reporting) + return; + if (!origin) + return; + + current->kmsan_ctx.allow_reporting = false; + ua_flags = user_access_save(); + spin_lock_irqsave(&kmsan_report_lock, flags); + pr_err("=====================================================\n"); + is_uaf = kmsan_uaf_from_eb(stack_depot_get_extra_bits(origin)); + switch (reason) { + case REASON_ANY: + bug_type = is_uaf ? "use-after-free" : "uninit-value"; + break; + case REASON_COPY_TO_USER: + bug_type = is_uaf ? "kernel-infoleak-after-free" : + "kernel-infoleak"; + break; + case REASON_SUBMIT_URB: + bug_type = is_uaf ? 
"kernel-usb-infoleak-after-free" : + "kernel-usb-infoleak"; + break; + } + + num_stack_entries = + stack_trace_save(stack_entries, KMSAN_STACK_DEPTH, 1); + skipnr = get_stack_skipnr(stack_entries, num_stack_entries); + + pr_err("BUG: KMSAN: %s in %pSb\n", + bug_type, (void *)stack_entries[skipnr]); + stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, + 0); + pr_err("\n"); + + kmsan_print_origin(origin); + + if (size) { + pr_err("\n"); + if (off_first == off_last) + pr_err("Byte %d of %d is uninitialized\n", off_first, + size); + else + pr_err("Bytes %d-%d of %d are uninitialized\n", + off_first, off_last, size); + } + if (address) + pr_err("Memory access of size %d starts at %px\n", size, + address); + if (user_addr && reason == REASON_COPY_TO_USER) + pr_err("Data copied to user address %px\n", user_addr); + pr_err("\n"); + dump_stack_print_info(KERN_ERR); + pr_err("=====================================================\n"); + add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); + spin_unlock_irqrestore(&kmsan_report_lock, flags); + if (panic_on_kmsan) + panic("kmsan.panic set ...\n"); + user_access_restore(ua_flags); + current->kmsan_ctx.allow_reporting = true; +} diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c new file mode 100644 index 0000000000000..de58cfbc55b9d --- /dev/null +++ b/mm/kmsan/shadow.c @@ -0,0 +1,186 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN shadow implementation. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../internal.h" +#include "kmsan.h" + +#define shadow_page_for(page) ((page)->kmsan_shadow) + +#define origin_page_for(page) ((page)->kmsan_origin) + +static void *shadow_ptr_for(struct page *page) +{ + return page_address(shadow_page_for(page)); +} + +static void *origin_ptr_for(struct page *page) +{ + return page_address(origin_page_for(page)); +} + +static bool page_has_metadata(struct page *page) +{ + return shadow_page_for(page) && origin_page_for(page); +} + +static void set_no_shadow_origin_page(struct page *page) +{ + shadow_page_for(page) = NULL; + origin_page_for(page) = NULL; +} + +/* + * Dummy load and store pages to be used when the real metadata is unavailable. + * There are separate pages for loads and stores, so that every load returns a + * zero, and every store doesn't affect other loads. + */ +static char dummy_load_page[PAGE_SIZE] __aligned(PAGE_SIZE); +static char dummy_store_page[PAGE_SIZE] __aligned(PAGE_SIZE); + +/* + * Taken from arch/x86/mm/physaddr.h to avoid using an instrumented version. + */ +static int kmsan_phys_addr_valid(unsigned long addr) +{ + if (IS_ENABLED(CONFIG_PHYS_ADDR_T_64BIT)) + return !(addr >> boot_cpu_data.x86_phys_bits); + else + return 1; +} + +/* + * Taken from arch/x86/mm/physaddr.c to avoid using an instrumented version. 
+ */ +static bool kmsan_virt_addr_valid(void *addr) +{ + unsigned long x = (unsigned long)addr; + unsigned long y = x - __START_KERNEL_map; + + /* use the carry flag to determine if x was < __START_KERNEL_map */ + if (unlikely(x > y)) { + x = y + phys_base; + + if (y >= KERNEL_IMAGE_SIZE) + return false; + } else { + x = y + (__START_KERNEL_map - PAGE_OFFSET); + + /* carry flag will be set if starting x was >= PAGE_OFFSET */ + if ((x > y) || !kmsan_phys_addr_valid(x)) + return false; + } + + return pfn_valid(x >> PAGE_SHIFT); +} + +static unsigned long vmalloc_meta(void *addr, bool is_origin) +{ + unsigned long addr64 = (unsigned long)addr, off; + + KMSAN_WARN_ON(is_origin && !IS_ALIGNED(addr64, KMSAN_ORIGIN_SIZE)); + if (kmsan_internal_is_vmalloc_addr(addr)) { + off = addr64 - VMALLOC_START; + return off + (is_origin ? KMSAN_VMALLOC_ORIGIN_START : + KMSAN_VMALLOC_SHADOW_START); + } + if (kmsan_internal_is_module_addr(addr)) { + off = addr64 - MODULES_VADDR; + return off + (is_origin ? KMSAN_MODULES_ORIGIN_START : + KMSAN_MODULES_SHADOW_START); + } + return 0; +} + +static struct page *virt_to_page_or_null(void *vaddr) +{ + if (kmsan_virt_addr_valid(vaddr)) + return virt_to_page(vaddr); + else + return NULL; +} + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *address, u64 size, + bool store) +{ + struct shadow_origin_ptr ret; + void *shadow; + + /* + * Even if we redirect this memory access to the dummy page, it will + * go out of bounds. + */ + KMSAN_WARN_ON(size > PAGE_SIZE); + + if (!kmsan_enabled || kmsan_in_runtime()) + goto return_dummy; + + KMSAN_WARN_ON(!kmsan_metadata_is_contiguous(address, size)); + shadow = kmsan_get_metadata(address, KMSAN_META_SHADOW); + if (!shadow) + goto return_dummy; + + ret.shadow = shadow; + ret.origin = kmsan_get_metadata(address, KMSAN_META_ORIGIN); + return ret; + +return_dummy: + if (store) { + /* Ignore this store. */ + ret.shadow = dummy_store_page; + ret.origin = dummy_store_page; + } else { + /* This load will return zero. */ + ret.shadow = dummy_load_page; + ret.origin = dummy_load_page; + } + return ret; +} + +/* + * Obtain the shadow or origin pointer for the given address, or NULL if there's + * none. The caller must check the return value for being non-NULL if needed. + * The return value of this function should not depend on whether we're in the + * runtime or not. + */ +void *kmsan_get_metadata(void *address, bool is_origin) +{ + u64 addr = (u64)address, pad, off; + struct page *page; + void *ret; + + if (is_origin && !IS_ALIGNED(addr, KMSAN_ORIGIN_SIZE)) { + pad = addr % KMSAN_ORIGIN_SIZE; + addr -= pad; + } + address = (void *)addr; + if (kmsan_internal_is_vmalloc_addr(address) || + kmsan_internal_is_module_addr(address)) + return (void *)vmalloc_meta(address, is_origin); + + page = virt_to_page_or_null(address); + if (!page) + return NULL; + if (!page_has_metadata(page)) + return NULL; + off = addr % PAGE_SIZE; + + ret = (is_origin ? 
origin_ptr_for(page) : shadow_ptr_for(page)) + off; + return ret; +} diff --git a/scripts/Makefile.kmsan b/scripts/Makefile.kmsan new file mode 100644 index 0000000000000..9793591f9855c --- /dev/null +++ b/scripts/Makefile.kmsan @@ -0,0 +1 @@ +export CFLAGS_KMSAN := -fsanitize=kernel-memory diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib index 9f69ecdd7977a..49e6e57fdf4c8 100644 --- a/scripts/Makefile.lib +++ b/scripts/Makefile.lib @@ -157,6 +157,15 @@ _c_flags += $(if $(patsubst n%,, \ endif endif +ifeq ($(CONFIG_KMSAN),y) +_c_flags += $(if $(patsubst n%,, \ + $(KMSAN_SANITIZE_$(basetarget).o)$(KMSAN_SANITIZE)y), \ + $(CFLAGS_KMSAN)) +_c_flags += $(if $(patsubst n%,, \ + $(KMSAN_ENABLE_CHECKS_$(basetarget).o)$(KMSAN_ENABLE_CHECKS)y), \ + , -mllvm -msan-disable-checks=1) +endif + ifeq ($(CONFIG_UBSAN),y) _c_flags += $(if $(patsubst n%,, \ $(UBSAN_SANITIZE_$(basetarget).o)$(UBSAN_SANITIZE)$(CONFIG_UBSAN_SANITIZE_ALL)), \ From patchwork Tue Apr 26 16:42:42 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827496 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 231E4C433F5 for ; Tue, 26 Apr 2022 16:44:57 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 53C966B008C; Tue, 26 Apr 2022 12:44:56 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 49D666B0092; Tue, 26 Apr 2022 12:44:56 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 2EE496B0093; Tue, 26 Apr 2022 12:44:56 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 21A356B008C for ; Tue, 26 Apr 2022 12:44:56 -0400 (EDT) Received: from smtpin04.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id EFB83B5B for ; Tue, 26 Apr 2022 16:44:55 +0000 (UTC) X-FDA: 79399604550.04.C92A732 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf22.hostedemail.com (Postfix) with ESMTP id B45EDC0039 for ; Tue, 26 Apr 2022 16:44:54 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id sc26-20020a1709078a1a00b006effb6a81b9so9362861ejc.6 for ; Tue, 26 Apr 2022 09:44:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=W9bLVQHqrvNNLkakI5+eXR9eCGhjUZBWMw6dTDyY1eM=; b=BgZrayGSuVAfOhwVEgizv3zHVLAes+gixFFwYt44q0Anb09MOHmi4/yqVFg0co0xDn meP8eUM1Y1cp68Fqnny0uLKF0Qtln5gmkU4uSohGMZYlVmXcU+el8E0u30FueVIIajUM fV0/mRhuiJoD20Efo5TORsSuNROVI//1ETGTAUMCg4QXyCvHkSNJh0BZuJ+MEW//jk5W 5zxE3TyPvfyC/+M++78ZImLJvn13Sy60BHii4QN5AI+ODEg/JVu9RvQCDCnkT/RTmnsc 5z+2AC9k9WCqo7Z5jf84941KD9T2VCauRnuI4Jr4yEj//ttB99vhdrJTx2BaTgdSpkdR nqtw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=W9bLVQHqrvNNLkakI5+eXR9eCGhjUZBWMw6dTDyY1eM=; b=SvxWMt2r/XV6ziWy9vvogdP9gTjY2jBoA/M6Z1WJXJLVD9MYhjA8eA6J2ShP0hs7M+ Q8ACf73O+UNBQckNP+XF9lShSbC+evdD4u1WQdvpHAK1PJMbvXHdpbo9EYrzro/1L/bf 3it57hnLTUB6AH7eIrKiAx8Gxeq8KaBpuWPNKdyGB9xrJGOaCCAAeg0+ubDdisdmtQcN 
pgm47ml+/J10hZ1EPW7drWciJS/Lp5nTF3TNPdPpwVjy0WGMOiUy2kPjYKpHPHCOAjbW druIzqqISRwWkTm4/h6iS5ZyYLxRA9zu9CHZOT8WE1G3gCeWJgBq3aXnBtZcFgvikVk+ 2ntw== X-Gm-Message-State: AOAM533t2EiTZpOrVHnITfUzPf5AN1iRias1yDiWkxIulnkZAAoRz/vr gLxZ6VrUX44KfxLMdMaV9LrNB6QHdSk= X-Google-Smtp-Source: ABdhPJwCG6zbeNkNwZsSEqpPcp6mObXzGX3q9XLvEMJvGGRoqht5Amkh6vhwVXvGFlIqKIoGDyBdhsZSVt0= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:84a:b0:423:fe99:8c53 with SMTP id b10-20020a056402084a00b00423fe998c53mr25379562edz.195.1650991494081; Tue, 26 Apr 2022 09:44:54 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:42 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-14-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 13/46] kmsan: implement kmsan_init(), initialize READ_ONCE_NOCHECK() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: iita5eezrmijqd5zyuykdrfb6uqyf8e7 Authentication-Results: imf22.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=BgZrayGS; spf=pass (imf22.hostedemail.com: domain of 3hiFoYgYKCIMnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3hiFoYgYKCIMnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: B45EDC0039 X-HE-Tag: 1650991494-88041 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: kmsan_init() is a macro that takes a possibly uninitialized value and returns an initialized value of the same type. It can be used e.g. in cases when a value comes from non-instrumented code to avoid false positive reports. In particular, we use kmsan_init() in READ_ONCE_NOCHECK() so that it returns initialized values. This helps defeat false positives e.g. from leftover stack contents accessed by stack unwinders. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Icd1260073666f944922f031bfb6762379ba1fa38 --- include/asm-generic/rwonce.h | 5 +++-- include/linux/kmsan-checks.h | 40 ++++++++++++++++++++++++++++++++++++ mm/kmsan/Makefile | 5 ++++- mm/kmsan/annotations.c | 28 +++++++++++++++++++++++++ 4 files changed, 75 insertions(+), 3 deletions(-) create mode 100644 mm/kmsan/annotations.c diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h index 8d0a6280e9824..7cf993af8e1ea 100644 --- a/include/asm-generic/rwonce.h +++ b/include/asm-generic/rwonce.h @@ -25,6 +25,7 @@ #include #include #include +#include /* * Yes, this permits 64-bit accesses on 32-bit architectures. 
These will @@ -69,14 +70,14 @@ unsigned long __read_once_word_nocheck(const void *addr) /* * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a - * word from memory atomically but without telling KASAN/KCSAN. This is + * word from memory atomically but without telling KASAN/KCSAN/KMSAN. This is * usually used by unwinding code when walking the stack of a running process. */ #define READ_ONCE_NOCHECK(x) \ ({ \ compiletime_assert(sizeof(x) == sizeof(unsigned long), \ "Unsupported access size for READ_ONCE_NOCHECK()."); \ - (typeof(x))__read_once_word_nocheck(&(x)); \ + kmsan_init((typeof(x))__read_once_word_nocheck(&(x))); \ }) static __no_kasan_or_inline diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h index a6522a0c28df9..ecd8336190fc0 100644 --- a/include/linux/kmsan-checks.h +++ b/include/linux/kmsan-checks.h @@ -14,6 +14,44 @@ #ifdef CONFIG_KMSAN +/* + * Helper functions that mark the return value initialized. + * See mm/kmsan/annotations.c. + */ +u8 kmsan_init_1(u8 value); +u16 kmsan_init_2(u16 value); +u32 kmsan_init_4(u32 value); +u64 kmsan_init_8(u64 value); + +static inline void *kmsan_init_ptr(void *ptr) +{ + return (void *)kmsan_init_8((u64)ptr); +} + +static inline char kmsan_init_char(char value) +{ + return (u8)kmsan_init_1((u8)value); +} + +#define __decl_kmsan_init_type(type, fn) unsigned type : fn, signed type : fn + +/** + * kmsan_init - Make the value initialized. + * @val: 1-, 2-, 4- or 8-byte integer that may be treated as uninitialized by + * KMSAN. + * + * Return: value of @val that KMSAN treats as initialized. + */ +#define kmsan_init(val) \ + ( \ + (typeof(val))(_Generic((val), \ + __decl_kmsan_init_type(char, kmsan_init_1), \ + __decl_kmsan_init_type(short, kmsan_init_2), \ + __decl_kmsan_init_type(int, kmsan_init_4), \ + __decl_kmsan_init_type(long, kmsan_init_8), \ + char : kmsan_init_char, \ + void * : kmsan_init_ptr)(val))) + /** * kmsan_poison_memory() - Mark the memory range as uninitialized. * @address: address to start with. @@ -48,6 +86,8 @@ void kmsan_check_memory(const void *address, size_t size); #else +#define kmsan_init(value) (value) + static inline void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile index a80dde1de7048..73b705cbf75b9 100644 --- a/mm/kmsan/Makefile +++ b/mm/kmsan/Makefile @@ -1,9 +1,11 @@ -obj-y := core.o instrumentation.o hooks.o report.o shadow.o +obj-y := core.o instrumentation.o hooks.o report.o shadow.o annotations.o KMSAN_SANITIZE := n KCOV_INSTRUMENT := n UBSAN_SANITIZE := n +KMSAN_SANITIZE_kmsan_annotations.o := y + # Disable instrumentation of KMSAN runtime with other tools. CC_FLAGS_KMSAN_RUNTIME := -fno-stack-protector CC_FLAGS_KMSAN_RUNTIME += $(call cc-option,-fno-conserve-stack) @@ -11,6 +13,7 @@ CC_FLAGS_KMSAN_RUNTIME += -DDISABLE_BRANCH_PROFILING CFLAGS_REMOVE.o = $(CC_FLAGS_FTRACE) +CFLAGS_annotations.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_core.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_hooks.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) diff --git a/mm/kmsan/annotations.c b/mm/kmsan/annotations.c new file mode 100644 index 0000000000000..8ccde90bcd12b --- /dev/null +++ b/mm/kmsan/annotations.c @@ -0,0 +1,28 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN annotations. + * + * The kmsan_init_SIZE functions reside in a separate translation unit to + * prevent inlining them. 
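 * (A typical use is the one in READ_ONCE_NOCHECK() above:
 *
 *     val = kmsan_init(val);
 *
 *  marks a value that crossed over from uninstrumented code, e.g. leftover
 *  stack contents read by an unwinder, as fully initialized.)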
Clang may inline functions marked with + * __no_sanitize_memory attribute into functions without it, which effectively + * results in ignoring the attribute. + * + * Copyright (C) 2017-2022 Google LLC + * Author: Alexander Potapenko + * + */ + +#include +#include + +#define DECLARE_KMSAN_INIT(size, t) \ + __no_sanitize_memory t kmsan_init_##size(t value) \ + { \ + return value; \ + } \ + EXPORT_SYMBOL(kmsan_init_##size) + +DECLARE_KMSAN_INIT(1, u8); +DECLARE_KMSAN_INIT(2, u16); +DECLARE_KMSAN_INIT(4, u32); +DECLARE_KMSAN_INIT(8, u64); From patchwork Tue Apr 26 16:42:43 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827497 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F2AF8C433EF for ; Tue, 26 Apr 2022 16:44:59 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 8E32A6B0092; Tue, 26 Apr 2022 12:44:59 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8925A6B0093; Tue, 26 Apr 2022 12:44:59 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 70BAD6B0095; Tue, 26 Apr 2022 12:44:59 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 62AD66B0092 for ; Tue, 26 Apr 2022 12:44:59 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay12.hostedemail.com (Postfix) with ESMTP id 3A2BF120B2C for ; Tue, 26 Apr 2022 16:44:59 +0000 (UTC) X-FDA: 79399604718.14.C1E92BC Received: from mail-lf1-f74.google.com (mail-lf1-f74.google.com [209.85.167.74]) by imf11.hostedemail.com (Postfix) with ESMTP id 0AE144004C for ; Tue, 26 Apr 2022 16:44:55 +0000 (UTC) Received: by mail-lf1-f74.google.com with SMTP id b16-20020a056512305000b00471effe87f9so4662204lfb.2 for ; Tue, 26 Apr 2022 09:44:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=QRr/YK4fONn3HTG8Fp9y+tnnWkzL1O1MoaFJezJL3sE=; b=GRtnB1LL/0WxvgZxxJD6lyURNXu6W8fs61POQbIbc0oumZ7k69NQoqD3tuCVl5yHj/ Gx/zngh3XGQ7t3vv0vTrJzo1vEkQTQ/ZRo418CMleof5FGdj9l6StzKqWUtXQAScQwo5 7P5qB/cvRckWW+BL+XiMdliFEO5yd4aWG7n9leQq17xa//Gzq2wBQTSCFsGG0JWGpu0+ pqy91hZcYLp1jkzx4uiMbKUqjtEQl6oUHgMvxwlnBvwzVMZsM6AIx5QoYYlhWVc7tQxH mHkgIrHRenuJcacBwVLYMBUhS7EqeYlTJUvvLtw99sQakaUg4whscJWcJBjl5RDYQ691 p17g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=QRr/YK4fONn3HTG8Fp9y+tnnWkzL1O1MoaFJezJL3sE=; b=3hB3N354+54HLHOkNWupCmS/ByG18lwhk8PRBF2cVJ22IvO2glrYZUp0Db1h40HY1t NFSlkZ2/hRaW+aI8f6KhpqyJlLOuBgCXe+6TPwBxoUI72KjserGx/luLX4nALOkkQA4Y nHrNIKRD8lcta8GvuzQMXUTCv7dNt+c0TnMaE1wQwgGlgBNUDyp7mqc33OfrivIk1OD0 BqsHAnkG4Gzh6H3I7XbDeTn/ATIdb4PfKmi52poru9XtVkU+yEOiIW2A5km44mcZaL76 IDQZKQYexma8OAgnjZnHrATguvngFjey9WGQYME1bL/wbYcQZGrrNsL/zh4/IeHv47Lk 6DvQ== X-Gm-Message-State: AOAM530HM8QihEJpuHK2i6CXe4Nj0BMyxmjiWL4rrwpXegqpbCJG8e2z Jtjn0ninN5gQVHu0fQTUn7r/+aeaaMQ= X-Google-Smtp-Source: ABdhPJxO/IzNek0yZm9+I1xLkkVDvFA7Rtx6U+bdK1jYJgO+mP11K/pbQde5ZHHXccF1PEslgOn1O7FKoX8= X-Received: from glider.muc.corp.google.com 
([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6512:104a:b0:471:f0c2:99ee with SMTP id c10-20020a056512104a00b00471f0c299eemr14014612lfb.142.1650991496992; Tue, 26 Apr 2022 09:44:56 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:43 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-15-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 14/46] kmsan: disable instrumentation of unsupported common kernel code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 0AE144004C X-Stat-Signature: iasxiyh4mnste9dwyyzhtxqocwdgewt9 X-Rspam-User: Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=GRtnB1LL; spf=pass (imf11.hostedemail.com: domain of 3iCFoYgYKCIUpurmn0pxxpun.lxvurw36-vvt4jlt.x0p@flex--glider.bounces.google.com designates 209.85.167.74 as permitted sender) smtp.mailfrom=3iCFoYgYKCIUpurmn0pxxpun.lxvurw36-vvt4jlt.x0p@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1650991495-134361 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: EFI stub cannot be linked with KMSAN runtime, so we disable instrumentation for it. Instrumenting kcov, stackdepot or lockdep leads to infinite recursion caused by instrumentation hooks calling instrumented code again. This patch was previously part of "kmsan: disable KMSAN instrumentation for certain kernel parts", but was split away per Mark Rutland's request. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I41ae706bd3474f074f6a870bfc3f0f90e9c720f7 --- drivers/firmware/efi/libstub/Makefile | 1 + kernel/Makefile | 1 + kernel/locking/Makefile | 3 ++- lib/Makefile | 1 + 4 files changed, 5 insertions(+), 1 deletion(-) diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile index d0537573501e9..81432d0c904b1 100644 --- a/drivers/firmware/efi/libstub/Makefile +++ b/drivers/firmware/efi/libstub/Makefile @@ -46,6 +46,7 @@ GCOV_PROFILE := n # Sanitizer runtimes are unavailable and cannot be linked here. 
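# KMSAN_SANITIZE := n below opts the whole directory out of KMSAN
# instrumentation; individual objects can instead use
# KMSAN_SANITIZE_<file>.o := n, as handled by the scripts/Makefile.lib hunk
# earlier in this series.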
KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n UBSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y diff --git a/kernel/Makefile b/kernel/Makefile index 847a82bfe0e3a..2a98e46479817 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -39,6 +39,7 @@ KCOV_INSTRUMENT_kcov.o := n KASAN_SANITIZE_kcov.o := n KCSAN_SANITIZE_kcov.o := n UBSAN_SANITIZE_kcov.o := n +KMSAN_SANITIZE_kcov.o := n CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack) -fno-stack-protector # Don't instrument error handlers diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile index d51cabf28f382..ea925731fa40f 100644 --- a/kernel/locking/Makefile +++ b/kernel/locking/Makefile @@ -5,8 +5,9 @@ KCOV_INSTRUMENT := n obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o -# Avoid recursion lockdep -> KCSAN -> ... -> lockdep. +# Avoid recursion lockdep -> sanitizer -> ... -> lockdep. KCSAN_SANITIZE_lockdep.o := n +KMSAN_SANITIZE_lockdep.o := n ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_lockdep.o = $(CC_FLAGS_FTRACE) diff --git a/lib/Makefile b/lib/Makefile index 6b9ffc1bd1eed..caeb55f661726 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -269,6 +269,7 @@ obj-$(CONFIG_IRQ_POLL) += irq_poll.o CFLAGS_stackdepot.o += -fno-builtin obj-$(CONFIG_STACKDEPOT) += stackdepot.o KASAN_SANITIZE_stackdepot.o := n +KMSAN_SANITIZE_stackdepot.o := n KCOV_INSTRUMENT_stackdepot.o := n obj-$(CONFIG_REF_TRACKER) += ref_tracker.o From patchwork Tue Apr 26 16:42:44 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827498 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 264E4C4332F for ; Tue, 26 Apr 2022 16:45:02 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B2BB86B0093; Tue, 26 Apr 2022 12:45:01 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id ADB0F6B0095; Tue, 26 Apr 2022 12:45:01 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 9CADE6B0096; Tue, 26 Apr 2022 12:45:01 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 8C4326B0093 for ; Tue, 26 Apr 2022 12:45:01 -0400 (EDT) Received: from smtpin01.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id 6701460E67 for ; Tue, 26 Apr 2022 16:45:01 +0000 (UTC) X-FDA: 79399604802.01.6968E4D Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf04.hostedemail.com (Postfix) with ESMTP id 1E2B640058 for ; Tue, 26 Apr 2022 16:44:56 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id gs30-20020a1709072d1e00b006f00c67c0b0so8986608ejc.11 for ; Tue, 26 Apr 2022 09:45:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=eiEr+2P23HBOM4McbqvmLa2JAqZ5nkaX8a1H8WzeBdc=; b=AH6wl3/h3jZOQ/PP2XRz3H+6yTFTXd6FbVvqWs3dSD1VQoD7oZ4P8RfDCTZSOTbJuF Dea0WX2BnUSJqwq6L7bGv/kGkAoJpJXcX90MvZJCD8V3DLZNs48DS11HIb7sInlZfzue DMGpTwc5BhsC0Yj7lwGhp9whIz5ydF7WZDmw1yJ41rcNpl0D6aJSuZ/x8E6uGXhPjIWY MZ1Sk37oKsYl4w4mevm5K1deWm/GnWAIBSURersc8GWz4STFmSMBvkmT8PLUk5DY0bqQ 
4G5CpFQfFmOpU7OaJZK8mUaojJl5Tf4k8xlMFEdi9tJgBabQcMfO6BrSj4NGwPy+nPHk 6HFQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=eiEr+2P23HBOM4McbqvmLa2JAqZ5nkaX8a1H8WzeBdc=; b=zikbNxIvgTkes1s1uUehdAKf3lLoE/3Ffnlas/dBXvmnI0BRH8M24VqfVbZMR3ZNAb f4lfBUm05zt0psMlTb06J2UChB11JCRxcPJu4ljRy6ut3lvkCrLvVfaL6Uu7gzwLLMqD spjeK/EyQizMbIp6GAgPIbwKhzfy49taYohs19UyJa2vfRgrZ8lJAQ5LUv2l3/e/ggIU 13kq4YM6bAwymTepuh/Ab95dj6/defihTBpZ3ghLYLkN79Zf0Uj9uYbl5I3dJxUVGOCX jwP53J8qk0I20Qy9nmTt70hjd8C0OeCTTF+9j4Bzv1xqyx6H39b7EtlbzH213yfzXfpc BnOQ== X-Gm-Message-State: AOAM532Nb8QM7idGlHpv7hp4KR38q9DiTxyGy60hSgmww8vkARo6UG6n v3H889TJm3AH5d9gBwyP3CSXlYfdwCM= X-Google-Smtp-Source: ABdhPJxDSlpkDTmdkRbr1nbB7ZI7Yjkrt1D8Ci5Kh7HlHhT8mmXkAZD7Y6I7F3vOvJUv/S/j20xf/QcJdwc= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:11cd:b0:425:ee49:58cb with SMTP id j13-20020a05640211cd00b00425ee4958cbmr10117861edw.157.1650991499654; Tue, 26 Apr 2022 09:44:59 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:44 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-16-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 15/46] MAINTAINERS: add entry for KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="AH6wl3/h"; spf=pass (imf04.hostedemail.com: domain of 3iyFoYgYKCIgsxupq3s00sxq.o0yxuz69-yyw7mow.03s@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3iyFoYgYKCIgsxupq3s00sxq.o0yxuz69-yyw7mow.03s@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 1E2B640058 X-Stat-Signature: ftuwb4hyztzkayb7thx85k58f79hzfuu X-HE-Tag: 1650991496-21067 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add entry for KMSAN maintainers/reviewers. 
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ic5836c2bceb6b63f71a60d3327d18af3aa3dab77 --- MAINTAINERS | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/MAINTAINERS b/MAINTAINERS index 5e8c2f6117661..dc73b124971f1 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -10937,6 +10937,18 @@ F: kernel/kmod.c F: lib/test_kmod.c F: tools/testing/selftests/kmod/ +KMSAN +M: Alexander Potapenko +R: Marco Elver +R: Dmitry Vyukov +L: kasan-dev@googlegroups.com +S: Maintained +F: Documentation/dev-tools/kmsan.rst +F: include/linux/kmsan*.h +F: lib/Kconfig.kmsan +F: mm/kmsan/ +F: scripts/Makefile.kmsan + KPROBES M: Naveen N. Rao M: Anil S Keshavamurthy
From patchwork Tue Apr 26 16:42:45 2022 X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827499 Date: Tue, 26 Apr 2022 18:42:45 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-17-glider@google.com> References: <20220426164315.625149-1-glider@google.com> Subject: [PATCH v3 16/46] kmsan: mm: maintain KMSAN metadata for page operations From: Alexander Potapenko To: glider@google.com Insert KMSAN hooks that make the necessary bookkeeping changes: - poison page shadow and origins in alloc_pages()/free_page(); - clear page shadow and origins in clear_page(), copy_user_highpage(); - copy page metadata in copy_highpage(), wp_page_copy(); - handle vmap()/vunmap()/iounmap(); Signed-off-by: Alexander Potapenko --- v2: -- move page metadata hooks implementation here -- remove call to kmsan_memblock_free_pages() v3: -- use PAGE_SHIFT in kmsan_ioremap_page_range() Link: https://linux-review.googlesource.com/id/I6d4f53a0e7eab46fa29f0348f3095d9f2e326850 --- arch/x86/include/asm/page_64.h | 13 ++++ arch/x86/mm/ioremap.c | 3 + include/linux/highmem.h | 3 + include/linux/kmsan.h | 123 +++++++++++++++++++++++++++++ mm/internal.h | 6 ++ mm/kmsan/hooks.c | 87 +++++++++++++++++++++++ mm/kmsan/shadow.c | 114 ++++++++++++++++++++++++++++++ mm/memory.c | 2 + mm/page_alloc.c | 11 +++ mm/vmalloc.c | 20 +++++- 10 files changed, 380 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h index e9c86299b8351..36e270a8ea9a4 100644 --- a/arch/x86/include/asm/page_64.h +++ b/arch/x86/include/asm/page_64.h @@ -45,14 +45,27 @@ void clear_page_orig(void *page); void clear_page_rep(void *page); void clear_page_erms(void *page); +/* This is an assembly header, avoid including too much of kmsan.h */ +#ifdef CONFIG_KMSAN +void kmsan_unpoison_memory(const void
*addr, size_t size); +#endif +__no_sanitize_memory static inline void clear_page(void *page) { +#ifdef CONFIG_KMSAN + /* alternative_call_2() changes @page. */ + void *page_copy = page; +#endif alternative_call_2(clear_page_orig, clear_page_rep, X86_FEATURE_REP_GOOD, clear_page_erms, X86_FEATURE_ERMS, "=D" (page), "0" (page) : "cc", "memory", "rax", "rcx"); +#ifdef CONFIG_KMSAN + /* Clear KMSAN shadow for the pages that have it. */ + kmsan_unpoison_memory(page_copy, PAGE_SIZE); +#endif } void copy_page(void *to, void *from); diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 17a492c273069..0da8608778221 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include @@ -474,6 +475,8 @@ void iounmap(volatile void __iomem *addr) return; } + kmsan_iounmap_page_range((unsigned long)addr, + (unsigned long)addr + get_vm_area_size(p)); memtype_free(p->phys_addr, p->phys_addr + get_vm_area_size(p)); /* Finally remove it */ diff --git a/include/linux/highmem.h b/include/linux/highmem.h index 39bb9b47fa9cd..3e1898a44d7e3 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include @@ -277,6 +278,7 @@ static inline void copy_user_highpage(struct page *to, struct page *from, vfrom = kmap_local_page(from); vto = kmap_local_page(to); copy_user_page(vto, vfrom, vaddr, to); + kmsan_unpoison_memory(page_address(to), PAGE_SIZE); kunmap_local(vto); kunmap_local(vfrom); } @@ -292,6 +294,7 @@ static inline void copy_highpage(struct page *to, struct page *from) vfrom = kmap_local_page(from); vto = kmap_local_page(to); copy_page(vto, vfrom); + kmsan_copy_page_meta(to, from); kunmap_local(vto); kunmap_local(vfrom); } diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index 4e35f43eceaa9..da41850b46cbd 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -42,6 +42,129 @@ struct kmsan_ctx { bool allow_reporting; }; +/** + * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. + * @page: struct page pointer returned by alloc_pages(). + * @order: order of allocated struct page. + * @flags: GFP flags used by alloc_pages() + * + * KMSAN marks 1<<@order pages starting at @page as uninitialized, unless + * @flags contain __GFP_ZERO. + */ +void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags); + +/** + * kmsan_free_page() - Notify KMSAN about a free_pages() call. + * @page: struct page pointer passed to free_pages(). + * @order: order of deallocated struct page. + * + * KMSAN marks freed memory as uninitialized. + */ +void kmsan_free_page(struct page *page, unsigned int order); + +/** + * kmsan_copy_page_meta() - Copy KMSAN metadata between two pages. + * @dst: destination page. + * @src: source page. + * + * KMSAN copies the contents of metadata pages for @src into the metadata pages + * for @dst. If @dst has no associated metadata pages, nothing happens. + * If @src has no associated metadata pages, @dst metadata pages are unpoisoned. + */ +void kmsan_copy_page_meta(struct page *dst, struct page *src); + +/** + * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap. + * @start: start of vmapped range. + * @end: end of vmapped range. + * @prot: page protection flags used for vmap. + * @pages: array of pages. + * @page_shift: page_shift passed to vmap_range_noflush(). + * + * KMSAN maps shadow and origin pages of @pages into contiguous ranges in + * vmalloc metadata address range. 
+ */ +void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift); + +/** + * kmsan_vunmap_kernel_range_noflush() - Notify KMSAN about a vunmap. + * @start: start of vunmapped range. + * @end: end of vunmapped range. + * + * KMSAN unmaps the contiguous metadata ranges created by + * kmsan_map_kernel_range_noflush(). + */ +void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end); + +/** + * kmsan_ioremap_page_range() - Notify KMSAN about a ioremap_page_range() call. + * @addr: range start. + * @end: range end. + * @phys_addr: physical range start. + * @prot: page protection flags used for ioremap_page_range(). + * @page_shift: page_shift argument passed to vmap_range_noflush(). + * + * KMSAN creates new metadata pages for the physical pages mapped into the + * virtual memory. + */ +void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift); + +/** + * kmsan_iounmap_page_range() - Notify KMSAN about a iounmap_page_range() call. + * @start: range start. + * @end: range end. + * + * KMSAN unmaps the metadata pages for the given range and, unlike for + * vunmap_page_range(), also deallocates them. + */ +void kmsan_iounmap_page_range(unsigned long start, unsigned long end); + +#else + +static inline int kmsan_alloc_page(struct page *page, unsigned int order, + gfp_t flags) +{ + return 0; +} + +static inline void kmsan_free_page(struct page *page, unsigned int order) +{ +} + +static inline void kmsan_copy_page_meta(struct page *dst, struct page *src) +{ +} + +static inline void kmsan_vmap_pages_range_noflush(unsigned long start, + unsigned long end, + pgprot_t prot, + struct page **pages, + unsigned int page_shift) +{ +} + +static inline void kmsan_vunmap_range_noflush(unsigned long start, + unsigned long end) +{ +} + +static inline void kmsan_ioremap_page_range(unsigned long start, + unsigned long end, + phys_addr_t phys_addr, + pgprot_t prot, + unsigned int page_shift) +{ +} + +static inline void kmsan_iounmap_page_range(unsigned long start, + unsigned long end) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/mm/internal.h b/mm/internal.h index cf16280ce1321..3cf6fde8f02c4 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -744,8 +744,14 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end, } #endif +int __vmap_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift); + void vunmap_range_noflush(unsigned long start, unsigned long end); +void __vunmap_range_noflush(unsigned long start, unsigned long end); + int numa_migrate_prep(struct page *page, struct vm_area_struct *vma, unsigned long addr, int page_nid, int *flags); diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 4ac62fa67a02a..070756be70e3a 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -26,6 +26,93 @@ * skipping effects of functions like memset() inside instrumented code. 
*/ +static unsigned long vmalloc_shadow(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, + KMSAN_META_SHADOW); +} + +static unsigned long vmalloc_origin(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, + KMSAN_META_ORIGIN); +} + +void kmsan_vunmap_range_noflush(unsigned long start, unsigned long end) +{ + __vunmap_range_noflush(vmalloc_shadow(start), vmalloc_shadow(end)); + __vunmap_range_noflush(vmalloc_origin(start), vmalloc_origin(end)); + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); +} +EXPORT_SYMBOL(kmsan_vunmap_range_noflush); + +/* + * This function creates new shadow/origin pages for the physical pages mapped + * into the virtual memory. If those physical pages already had shadow/origin, + * those are ignored. + */ +void kmsan_ioremap_page_range(unsigned long start, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot, + unsigned int page_shift) +{ + gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO; + struct page *shadow, *origin; + unsigned long off = 0; + int i, nr; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + kmsan_enter_runtime(); + for (i = 0; i < nr; i++, off += PAGE_SIZE) { + shadow = alloc_pages(gfp_mask, 1); + origin = alloc_pages(gfp_mask, 1); + __vmap_pages_range_noflush( + vmalloc_shadow(start + off), + vmalloc_shadow(start + off + PAGE_SIZE), prot, &shadow, + PAGE_SHIFT); + __vmap_pages_range_noflush( + vmalloc_origin(start + off), + vmalloc_origin(start + off + PAGE_SIZE), prot, &origin, + PAGE_SHIFT); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_ioremap_page_range); + +void kmsan_iounmap_page_range(unsigned long start, unsigned long end) +{ + unsigned long v_shadow, v_origin; + struct page *shadow, *origin; + int i, nr; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + kmsan_enter_runtime(); + v_shadow = (unsigned long)vmalloc_shadow(start); + v_origin = (unsigned long)vmalloc_origin(start); + for (i = 0; i < nr; i++, v_shadow += PAGE_SIZE, v_origin += PAGE_SIZE) { + shadow = kmsan_vmalloc_to_page_or_null((void *)v_shadow); + origin = kmsan_vmalloc_to_page_or_null((void *)v_origin); + __vunmap_range_noflush(v_shadow, vmalloc_shadow(end)); + __vunmap_range_noflush(v_origin, vmalloc_origin(end)); + if (shadow) + __free_pages(shadow, 1); + if (origin) + __free_pages(origin, 1); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_iounmap_page_range); + /* Functions from kmsan-checks.h follow. */ void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c index de58cfbc55b9d..8fe6a5ed05e67 100644 --- a/mm/kmsan/shadow.c +++ b/mm/kmsan/shadow.c @@ -184,3 +184,117 @@ void *kmsan_get_metadata(void *address, bool is_origin) ret = (is_origin ? 
origin_ptr_for(page) : shadow_ptr_for(page)) + off; return ret; } + +void kmsan_copy_page_meta(struct page *dst, struct page *src) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + if (!dst || !page_has_metadata(dst)) + return; + if (!src || !page_has_metadata(src)) { + kmsan_internal_unpoison_memory(page_address(dst), PAGE_SIZE, + /*checked*/ false); + return; + } + + kmsan_enter_runtime(); + __memcpy(shadow_ptr_for(dst), shadow_ptr_for(src), PAGE_SIZE); + __memcpy(origin_ptr_for(dst), origin_ptr_for(src), PAGE_SIZE); + kmsan_leave_runtime(); +} + +void kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags) +{ + bool initialized = (flags & __GFP_ZERO) || !kmsan_enabled; + struct page *shadow, *origin; + depot_stack_handle_t handle; + int pages = 1 << order; + int i; + + if (!page) + return; + + shadow = shadow_page_for(page); + origin = origin_page_for(page); + + if (initialized) { + __memset(page_address(shadow), 0, PAGE_SIZE * pages); + __memset(page_address(origin), 0, PAGE_SIZE * pages); + return; + } + + /* Zero pages allocated by the runtime should also be initialized. */ + if (kmsan_in_runtime()) + return; + + __memset(page_address(shadow), -1, PAGE_SIZE * pages); + kmsan_enter_runtime(); + handle = kmsan_save_stack_with_flags(flags, /*extra_bits*/ 0); + kmsan_leave_runtime(); + /* + * Addresses are page-aligned, pages are contiguous, so it's ok + * to just fill the origin pages with |handle|. + */ + for (i = 0; i < PAGE_SIZE * pages / sizeof(handle); i++) + ((depot_stack_handle_t *)page_address(origin))[i] = handle; +} + +void kmsan_free_page(struct page *page, unsigned int order) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + kmsan_internal_poison_memory(page_address(page), + PAGE_SIZE << compound_order(page), + GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} + +void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages, + unsigned int page_shift) +{ + unsigned long shadow_start, origin_start, shadow_end, origin_end; + struct page **s_pages, **o_pages; + int nr, i, mapped; + + if (!kmsan_enabled) + return; + + shadow_start = vmalloc_meta((void *)start, KMSAN_META_SHADOW); + shadow_end = vmalloc_meta((void *)end, KMSAN_META_SHADOW); + if (!shadow_start) + return; + + nr = (end - start) / PAGE_SIZE; + s_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL); + o_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL); + if (!s_pages || !o_pages) + goto ret; + for (i = 0; i < nr; i++) { + s_pages[i] = shadow_page_for(pages[i]); + o_pages[i] = origin_page_for(pages[i]); + } + prot = __pgprot(pgprot_val(prot) | _PAGE_NX); + prot = PAGE_KERNEL; + + origin_start = vmalloc_meta((void *)start, KMSAN_META_ORIGIN); + origin_end = vmalloc_meta((void *)end, KMSAN_META_ORIGIN); + kmsan_enter_runtime(); + mapped = __vmap_pages_range_noflush(shadow_start, shadow_end, prot, + s_pages, page_shift); + KMSAN_WARN_ON(mapped); + mapped = __vmap_pages_range_noflush(origin_start, origin_end, prot, + o_pages, page_shift); + KMSAN_WARN_ON(mapped); + kmsan_leave_runtime(); + flush_tlb_kernel_range(shadow_start, shadow_end); + flush_tlb_kernel_range(origin_start, origin_end); + flush_cache_vmap(shadow_start, shadow_end); + flush_cache_vmap(origin_start, origin_end); + +ret: + kfree(s_pages); + kfree(o_pages); +} diff --git a/mm/memory.c b/mm/memory.c index 76e3af9639d93..04aa68acffeb3 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -52,6 +52,7 @@ #include #include 
#include +#include #include #include #include @@ -3032,6 +3033,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) put_page(old_page); return 0; } + kmsan_copy_page_meta(new_page, old_page); } if (mem_cgroup_charge(page_folio(new_page), mm, GFP_KERNEL)) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 0e42038382c12..98393e01e4259 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -27,6 +27,7 @@ #include #include #include +#include #include #include #include @@ -1305,6 +1306,7 @@ static __always_inline bool free_pages_prepare(struct page *page, VM_BUG_ON_PAGE(PageTail(page), page); trace_mm_page_free(page, order); + kmsan_free_page(page, order); if (unlikely(PageHWPoison(page)) && !order) { /* @@ -3696,6 +3698,14 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone, /* * Allocate a page from the given zone. Use pcplists for order-0 allocations. */ + +/* + * Do not instrument rmqueue() with KMSAN. This function may call + * __msan_poison_alloca() through a call to set_pfnblock_flags_mask(). + * If __msan_poison_alloca() attempts to allocate pages for the stack depot, it + * may call rmqueue() again, which will result in a deadlock. + */ +__no_sanitize_memory static inline struct page *rmqueue(struct zone *preferred_zone, struct zone *zone, unsigned int order, @@ -5428,6 +5438,7 @@ struct page *__alloc_pages(gfp_t gfp, unsigned int order, int preferred_nid, } trace_mm_page_alloc(page, order, alloc_gfp, ac.migratetype); + kmsan_alloc_page(page, order, alloc_gfp); return page; } diff --git a/mm/vmalloc.c b/mm/vmalloc.c index cadfbb5155ea5..76a54007e5142 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -320,6 +320,9 @@ int ioremap_page_range(unsigned long addr, unsigned long end, err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot), ioremap_max_page_shift); flush_cache_vmap(addr, end); + if (!err) + kmsan_ioremap_page_range(addr, end, phys_addr, prot, + ioremap_max_page_shift); return err; } @@ -419,7 +422,7 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end, * * This is an internal function only. Do not use outside mm/. */ -void vunmap_range_noflush(unsigned long start, unsigned long end) +void __vunmap_range_noflush(unsigned long start, unsigned long end) { unsigned long next; pgd_t *pgd; @@ -441,6 +444,12 @@ void vunmap_range_noflush(unsigned long start, unsigned long end) arch_sync_kernel_mappings(start, end); } +void vunmap_range_noflush(unsigned long start, unsigned long end) +{ + kmsan_vunmap_range_noflush(start, end); + __vunmap_range_noflush(start, end); +} + /** * vunmap_range - unmap kernel virtual addresses * @addr: start of the VM area to unmap @@ -575,7 +584,7 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end, * * This is an internal function only. Do not use outside mm/. 
*/ -int vmap_pages_range_noflush(unsigned long addr, unsigned long end, +int __vmap_pages_range_noflush(unsigned long addr, unsigned long end, pgprot_t prot, struct page **pages, unsigned int page_shift) { unsigned int i, nr = (end - addr) >> PAGE_SHIFT; @@ -601,6 +610,13 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end, return 0; } +int vmap_pages_range_noflush(unsigned long addr, unsigned long end, + pgprot_t prot, struct page **pages, unsigned int page_shift) +{ + kmsan_vmap_pages_range_noflush(addr, end, prot, pages, page_shift); + return __vmap_pages_range_noflush(addr, end, prot, pages, page_shift); +} + /** * vmap_pages_range - map pages to a kernel virtual address * @addr: start of the VM area to map
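All of the page hooks above maintain a single invariant: each data page that has KMSAN metadata is paired with a shadow page (bit-exact "is this initialized?" state for its contents) and an origin page (a stack-depot handle per 4 bytes recording where an uninitialized value came from), and that metadata must follow the data through allocation, clearing and copying. The userspace sketch below is an editorial illustration of that bookkeeping, not code from the patch; page_t, PAGE_SZ and the integer origin id are hypothetical stand-ins for struct page, PAGE_SIZE and depot_stack_handle_t, and the shadow is simplified to one flag byte per data byte.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SZ 64 /* tiny page, just for the demo */

typedef struct {
	uint8_t data[PAGE_SZ];
	uint8_t shadow[PAGE_SZ];      /* 0xff: byte is uninitialized */
	uint32_t origin[PAGE_SZ / 4]; /* "stack id" blamed for the uninit value */
} page_t;

/* Mirrors kmsan_alloc_page(): poison the page unless it was zeroed. */
static void model_alloc_page(page_t *p, int gfp_zero, uint32_t alloc_stack_id)
{
	if (gfp_zero) {
		memset(p->shadow, 0, PAGE_SZ);
		memset(p->origin, 0, sizeof(p->origin));
		return;
	}
	memset(p->shadow, 0xff, PAGE_SZ);
	for (size_t i = 0; i < PAGE_SZ / 4; i++)
		p->origin[i] = alloc_stack_id; /* blame the allocation site */
}

/* Mirrors kmsan_copy_page_meta(): metadata travels together with the data. */
static void model_copy_page(page_t *dst, const page_t *src)
{
	memcpy(dst->data, src->data, PAGE_SZ);
	memcpy(dst->shadow, src->shadow, PAGE_SZ);
	memcpy(dst->origin, src->origin, sizeof(dst->origin));
}

int main(void)
{
	page_t a, b;

	model_alloc_page(&a, /*gfp_zero=*/0, /*alloc_stack_id=*/0x1234);
	a.data[0] = 42;
	a.shadow[0] = 0; /* the store initializes byte 0 */
	model_copy_page(&b, &a);
	printf("byte 0 initialized: %d, origin of bytes 1-3: %#x\n",
	       b.shadow[0] == 0, (unsigned)b.origin[0]);
	return 0;
}

Running the sketch shows the copy reporting byte 0 as initialized while the remaining bytes still carry the allocation-site origin, which is the state the copy_highpage() and wp_page_copy() hooks above are meant to preserve.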
From patchwork Tue Apr 26 16:42:46 2022 X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827500 Date: Tue, 26 Apr 2022 18:42:46 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-18-glider@google.com> References: <20220426164315.625149-1-glider@google.com> Subject: [PATCH v3 17/46] kmsan: mm: call KMSAN hooks from SLUB code From: Alexander Potapenko To: glider@google.com In order to report uninitialized memory coming from heap allocations KMSAN has to poison them unless they're created with __GFP_ZERO. It's handy that we need KMSAN hooks in the places where init_on_alloc/init_on_free initialization is performed. Signed-off-by: Alexander Potapenko --- v2: -- move the implementation of SLUB hooks here Link: https://linux-review.googlesource.com/id/I6954b386c5c5d7f99f48bb6cbcc74b75136ce86e --- include/linux/kmsan.h | 57 ++++++++++++++++++++++++++++++ mm/kmsan/hooks.c | 80 +++++++++++++++++++++++++++++++++++++++++++ mm/slab.h | 1 + mm/slub.c | 21 ++++++++++-- 4 files changed, 157 insertions(+), 2 deletions(-) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index da41850b46cbd..ed3630068e2ef 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -16,6 +16,7 @@ #include struct page; +struct kmem_cache; #ifdef CONFIG_KMSAN @@ -73,6 +74,44 @@ void kmsan_free_page(struct page *page, unsigned int order); */ void kmsan_copy_page_meta(struct page *dst, struct page *src); +/** + * kmsan_slab_alloc() - Notify KMSAN about a slab allocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * @flags: GFP flags passed to the allocator. + * + * Depending on cache flags and GFP flags, KMSAN sets up the metadata of the + * newly created object, marking it as initialized or uninitialized.
+ */ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); + +/** + * kmsan_slab_free() - Notify KMSAN about a slab deallocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * + * KMSAN marks the freed object as uninitialized. + */ +void kmsan_slab_free(struct kmem_cache *s, void *object); + +/** + * kmsan_kmalloc_large() - Notify KMSAN about a large slab allocation. + * @ptr: object pointer. + * @size: object size. + * @flags: GFP flags passed to the allocator. + * + * Similar to kmsan_slab_alloc(), but for large allocations. + */ +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); + +/** + * kmsan_kfree_large() - Notify KMSAN about a large slab deallocation. + * @ptr: object pointer. + * + * Similar to kmsan_slab_free(), but for large allocations. + */ +void kmsan_kfree_large(const void *ptr); + /** * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap. * @start: start of vmapped range. @@ -139,6 +178,24 @@ static inline void kmsan_copy_page_meta(struct page *dst, struct page *src) { } +static inline void kmsan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags) +{ +} + +static inline void kmsan_slab_free(struct kmem_cache *s, void *object) +{ +} + +static inline void kmsan_kmalloc_large(const void *ptr, size_t size, + gfp_t flags) +{ +} + +static inline void kmsan_kfree_large(const void *ptr) +{ +} + static inline void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, pgprot_t prot, diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 070756be70e3a..052e17b7a717d 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -26,6 +26,86 @@ * skipping effects of functions like memset() inside instrumented code. */ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) +{ + if (unlikely(object == NULL)) + return; + if (!kmsan_enabled || kmsan_in_runtime()) + return; + /* + * There's a ctor or this is an RCU cache - do nothing. The memory + * status hasn't changed since last use. + */ + if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU)) + return; + + kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_memory(object, s->object_size, + KMSAN_POISON_CHECK); + else + kmsan_internal_poison_memory(object, s->object_size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_slab_alloc); + +void kmsan_slab_free(struct kmem_cache *s, void *object) +{ + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + /* RCU slabs could be legally used after free within the RCU period */ + if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON))) + return; + /* + * If there's a constructor, freed memory must remain in the same state + * until the next allocation. We cannot save its state to detect + * use-after-free bugs, instead we just keep it unpoisoned. 
+ */ + if (s->ctor) + return; + kmsan_enter_runtime(); + kmsan_internal_poison_memory(object, s->object_size, GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_slab_free); + +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags) +{ + if (unlikely(ptr == NULL)) + return; + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_memory((void *)ptr, size, + /*checked*/ true); + else + kmsan_internal_poison_memory((void *)ptr, size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_kmalloc_large); + +void kmsan_kfree_large(const void *ptr) +{ + struct page *page; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + kmsan_enter_runtime(); + page = virt_to_head_page((void *)ptr); + KMSAN_WARN_ON(ptr != page_address(page)); + kmsan_internal_poison_memory((void *)ptr, + PAGE_SIZE << compound_order(page), + GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_kfree_large); + static unsigned long vmalloc_shadow(unsigned long addr) { return (unsigned long)kmsan_get_metadata((void *)addr, diff --git a/mm/slab.h b/mm/slab.h index 95eb34174c1bb..1276b83656091 100644 --- a/mm/slab.h +++ b/mm/slab.h @@ -751,6 +751,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, memset(p[i], 0, s->object_size); kmemleak_alloc_recursive(p[i], s->object_size, 1, s->flags, flags); + kmsan_slab_alloc(s, p[i], flags); } memcg_slab_post_alloc_hook(s, objcg, flags, size, p); diff --git a/mm/slub.c b/mm/slub.c index ed5c2c03a47aa..45082acaa6739 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include @@ -357,18 +358,28 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object) prefetchw(object + s->offset); } +/* + * When running under KMSAN, get_freepointer_safe() may return an uninitialized + * pointer value in the case the current thread loses the race for the next + * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in + * slab_alloc_node() will fail, so the uninitialized value won't be used, but + * KMSAN will still check all arguments of cmpxchg because of imperfect + * handling of inline assembly. + * To work around this problem, use kmsan_init() to force initialize the + * return value of get_freepointer_safe(). + */ static inline void *get_freepointer_safe(struct kmem_cache *s, void *object) { unsigned long freepointer_addr; void *p; if (!debug_pagealloc_enabled_static()) - return get_freepointer(s, object); + return kmsan_init(get_freepointer(s, object)); object = kasan_reset_tag(object); freepointer_addr = (unsigned long)object + s->offset; copy_from_kernel_nofault(&p, (void **)freepointer_addr, sizeof(p)); - return freelist_ptr(s, p, freepointer_addr); + return kmsan_init(freelist_ptr(s, p, freepointer_addr)); } static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp) @@ -1683,6 +1694,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags) ptr = kasan_kmalloc_large(ptr, size, flags); /* As ptr might get tagged, call kmemleak hook after KASAN. 
*/ kmemleak_alloc(ptr, size, 1, flags); + kmsan_kmalloc_large(ptr, size, flags); return ptr; } @@ -1690,12 +1702,14 @@ static __always_inline void kfree_hook(void *x) { kmemleak_free(x); kasan_kfree_large(x); + kmsan_kfree_large(x); } static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x, bool init) { kmemleak_free_recursive(x, s->flags); + kmsan_slab_free(s, x); debug_check_no_locks_freed(x, s->object_size); @@ -3730,6 +3744,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, */ slab_post_alloc_hook(s, objcg, flags, size, p, slab_want_init_on_alloc(flags, s)); + return i; error: slub_put_cpu_ptr(s->cpu_slab); @@ -5898,6 +5913,7 @@ static char *create_unique_id(struct kmem_cache *s) p += sprintf(p, "%07u", s->size); BUG_ON(p > name + ID_STR_LENGTH - 1); + kmsan_unpoison_memory(name, p - name); return name; } @@ -5999,6 +6015,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name) al->name = name; al->next = alias_list; alias_list = al; + kmsan_unpoison_memory(al, sizeof(struct saved_alias)); return 0; }
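Most of the weight of kmsan_slab_alloc() is in deciding what to do with the object's metadata, based on the cache flags and the GFP flags. Below is a small standalone sketch of only that decision, as an editorial illustration rather than code from the patch; struct cache_model and the MODEL_* flag constants are simplified stand-ins for the real SLUB data structures.

#include <stdbool.h>
#include <stdio.h>

#define MODEL_GFP_ZERO          (1u << 0)
#define MODEL_SLAB_TYPESAFE_RCU (1u << 1)

struct cache_model {
	const char *name;
	unsigned int flags;
	bool has_ctor;
};

enum meta_action { MARK_INITIALIZED, MARK_UNINITIALIZED, LEAVE_AS_IS };

/* What happens to the object's shadow when it is handed out by the cache. */
static enum meta_action slab_alloc_action(const struct cache_model *c,
					  unsigned int gfp_flags)
{
	/*
	 * Caches with a constructor or SLAB_TYPESAFE_BY_RCU semantics keep
	 * whatever state the previous user left: the memory is not reset.
	 */
	if (c->has_ctor || (c->flags & MODEL_SLAB_TYPESAFE_RCU))
		return LEAVE_AS_IS;
	return (gfp_flags & MODEL_GFP_ZERO) ? MARK_INITIALIZED
					    : MARK_UNINITIALIZED;
}

int main(void)
{
	struct cache_model plain = { "kmalloc-64", 0, false };
	struct cache_model rcu = { "rcu-cache", MODEL_SLAB_TYPESAFE_RCU, false };

	printf("plain alloc: %d, zeroed alloc: %d, RCU cache alloc: %d\n",
	       slab_alloc_action(&plain, 0),
	       slab_alloc_action(&plain, MODEL_GFP_ZERO),
	       slab_alloc_action(&rcu, 0));
	return 0;
}

kmsan_slab_free() applies the mirror-image policy: objects from plain caches are re-poisoned so reads after free show up as uses of uninitialized memory, while ctor caches must keep their constructed state and SLAB_TYPESAFE_BY_RCU objects may legitimately be read after free.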
From patchwork Tue Apr 26 16:42:47 2022 X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827501 Date: Tue, 26 Apr 2022 18:42:47 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-19-glider@google.com> References: <20220426164315.625149-1-glider@google.com> Subject: [PATCH v3 18/46] kmsan: handle task creation and exiting From: Alexander Potapenko To: glider@google.com Tell KMSAN that a new task is created, so the tool creates a backing metadata structure for that task. Signed-off-by: Alexander Potapenko --- v2: -- move implementation of kmsan_task_create() and kmsan_task_exit() here Link: https://linux-review.googlesource.com/id/I0f41c3a1c7d66f7e14aabcfdfc7c69addb945805 --- include/linux/kmsan.h | 17 +++++++++++++++++ kernel/exit.c | 2 ++ kernel/fork.c | 2 ++ mm/kmsan/core.c | 10 ++++++++++ mm/kmsan/hooks.c | 19 +++++++++++++++++++ mm/kmsan/kmsan.h | 2 ++ 6 files changed, 52 insertions(+) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index ed3630068e2ef..dca42e0e91991 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -17,6 +17,7 @@ struct page; struct kmem_cache; +struct task_struct; #ifdef CONFIG_KMSAN @@ -43,6 +44,14 @@ struct kmsan_ctx { bool allow_reporting; }; +void kmsan_task_create(struct task_struct *task); + +/** + * kmsan_task_exit() - Notify KMSAN that a task has exited. + * @task: task about to finish.
+ */ +void kmsan_task_exit(struct task_struct *task); + /** * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. * @page: struct page pointer returned by alloc_pages(). @@ -164,6 +173,14 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end); #else +static inline void kmsan_task_create(struct task_struct *task) +{ +} + +static inline void kmsan_task_exit(struct task_struct *task) +{ +} + static inline int kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags) { diff --git a/kernel/exit.c b/kernel/exit.c index f072959fcab7f..1784b7a741ddd 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -60,6 +60,7 @@ #include #include #include +#include #include #include #include @@ -741,6 +742,7 @@ void __noreturn do_exit(long code) WARN_ON(tsk->plug); kcov_task_exit(tsk); + kmsan_task_exit(tsk); coredump_task_exit(tsk); ptrace_event(PTRACE_EVENT_EXIT, code); diff --git a/kernel/fork.c b/kernel/fork.c index 9796897560ab1..a6178bd28c409 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include #include @@ -1027,6 +1028,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) tsk->worker_private = NULL; kcov_task_init(tsk); + kmsan_task_create(tsk); kmap_local_fork(tsk); #ifdef CONFIG_FAULT_INJECTION diff --git a/mm/kmsan/core.c b/mm/kmsan/core.c index 933d864d9d467..4b405abbb6c03 100644 --- a/mm/kmsan/core.c +++ b/mm/kmsan/core.c @@ -44,6 +44,16 @@ bool kmsan_enabled __read_mostly; */ DEFINE_PER_CPU(struct kmsan_ctx, kmsan_percpu_ctx); +void kmsan_internal_task_create(struct task_struct *task) +{ + struct kmsan_ctx *ctx = &task->kmsan_ctx; + + __memset(ctx, 0, sizeof(struct kmsan_ctx)); + ctx->allow_reporting = true; + kmsan_internal_unpoison_memory(current_thread_info(), + sizeof(struct thread_info), false); +} + void kmsan_internal_poison_memory(void *address, size_t size, gfp_t flags, unsigned int poison_flags) { diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 052e17b7a717d..43a529569053d 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -26,6 +26,25 @@ * skipping effects of functions like memset() inside instrumented code. 
*/ +void kmsan_task_create(struct task_struct *task) +{ + kmsan_enter_runtime(); + kmsan_internal_task_create(task); + kmsan_leave_runtime(); +} +EXPORT_SYMBOL(kmsan_task_create); + +void kmsan_task_exit(struct task_struct *task) +{ + struct kmsan_ctx *ctx = &task->kmsan_ctx; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + + ctx->allow_reporting = false; +} +EXPORT_SYMBOL(kmsan_task_exit); + void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) { if (unlikely(object == NULL)) diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h index bfe38789950a6..a1b5900ffd97b 100644 --- a/mm/kmsan/kmsan.h +++ b/mm/kmsan/kmsan.h @@ -172,6 +172,8 @@ void kmsan_internal_set_shadow_origin(void *address, size_t size, int b, u32 origin, bool checked); depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id); +void kmsan_internal_task_create(struct task_struct *task); + bool kmsan_metadata_is_contiguous(void *addr, size_t size); void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, int reason);
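The per-task state set up here is intentionally small: task creation zeroes the embedded struct kmsan_ctx and enables reporting, and task exit only clears allow_reporting so that no reports fire while the dying task's stack is being torn down and reused. As an editorial illustration (not code from the patch), the lifecycle can be modeled in a few lines of userspace C; struct task_model is a hypothetical stand-in for task_struct.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct kmsan_ctx_model {
	unsigned int in_runtime_cnt; /* reentrancy counter for the runtime */
	bool allow_reporting;        /* reports are suppressed when false */
};

struct task_model {
	const char *comm;
	struct kmsan_ctx_model kmsan_ctx;
};

/* Mirrors kmsan_task_create(): start from a clean, reporting-enabled context. */
static void model_task_create(struct task_model *t)
{
	memset(&t->kmsan_ctx, 0, sizeof(t->kmsan_ctx));
	t->kmsan_ctx.allow_reporting = true;
}

/* Mirrors kmsan_task_exit(): silence reports for the dying task. */
static void model_task_exit(struct task_model *t)
{
	t->kmsan_ctx.allow_reporting = false;
}

int main(void)
{
	struct task_model t = { .comm = "demo" };

	model_task_create(&t);
	printf("%s: reporting=%d\n", t.comm, t.kmsan_ctx.allow_reporting);
	model_task_exit(&t);
	printf("%s: reporting=%d\n", t.comm, t.kmsan_ctx.allow_reporting);
	return 0;
}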
From patchwork Tue Apr 26 16:42:48 2022 X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827502 Date: Tue, 26 Apr 2022 18:42:48 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-20-glider@google.com> References: <20220426164315.625149-1-glider@google.com> Subject: [PATCH v3 19/46] kmsan: init: call KMSAN initialization routines From: Alexander Potapenko To: glider@google.com kmsan_init_shadow() scans the mappings created at boot time and creates metadata pages for those mappings. When the memblock allocator returns pages to pagealloc, we reserve 2/3 of those pages and use them as metadata for the remaining 1/3. Once KMSAN starts, every page allocated by pagealloc has its associated shadow and origin pages. kmsan_init_runtime() initializes the bookkeeping for init_task and enables KMSAN.
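The 2/3 reservation works per allocation order: the first block of a given order that memblock frees is held back as a future shadow block, the second as a future origin block, and only every third block reaches the page allocator, taking the two held-back blocks with it as its metadata. The sketch below is an editorial toy model of that arithmetic, not code from the patch; plain integers stand in for page blocks of one order.

#include <stdio.h>

struct held_model { int shadow, origin; };

/*
 * Mirrors the decision in kmsan_memblock_free_pages(): returns 1 when the
 * block may be given to the page allocator, 0 when it is kept as metadata.
 */
static int model_memblock_free_block(struct held_model *h, int block)
{
	if (!h->shadow) {
		h->shadow = block;  /* first block of this order: future shadow */
		return 0;
	}
	if (!h->origin) {
		h->origin = block;  /* second block: future origin */
		return 0;
	}
	printf("block %d released, shadow=%d origin=%d\n",
	       block, h->shadow, h->origin);
	h->shadow = h->origin = 0;  /* third block goes to pagealloc */
	return 1;
}

int main(void)
{
	struct held_model h = { 0, 0 };
	int released = 0;

	for (int block = 1; block <= 9; block++)
		released += model_memblock_free_block(&h, block);
	printf("released %d of 9 blocks\n", released); /* 3, i.e. 1/3 */
	return 0;
}

Leftover held-back blocks are later split and recycled by kmsan_memblock_discard(), as the init code further down shows.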
Signed-off-by: Alexander Potapenko --- v2: -- move mm/kmsan/init.c and kmsan_memblock_free_pages() to this patch -- print a warning that KMSAN is a debugging tool (per Greg K-H's request) Link: https://linux-review.googlesource.com/id/I7bc53706141275914326df2345881ffe0cdd16bd --- include/linux/kmsan.h | 48 +++++++++ init/main.c | 3 + mm/kmsan/Makefile | 3 +- mm/kmsan/init.c | 240 ++++++++++++++++++++++++++++++++++++++++++ mm/kmsan/kmsan.h | 3 + mm/kmsan/shadow.c | 36 +++++++ mm/page_alloc.c | 3 + 7 files changed, 335 insertions(+), 1 deletion(-) create mode 100644 mm/kmsan/init.c diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index dca42e0e91991..a5767c728a46b 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -52,6 +52,40 @@ void kmsan_task_create(struct task_struct *task); */ void kmsan_task_exit(struct task_struct *task); +/** + * kmsan_init_shadow() - Initialize KMSAN shadow at boot time. + * + * Allocate and initialize KMSAN metadata for early allocations. + */ +void __init kmsan_init_shadow(void); + +/** + * kmsan_init_runtime() - Initialize KMSAN state and enable KMSAN. + */ +void __init kmsan_init_runtime(void); + +/** + * kmsan_memblock_free_pages() - handle freeing of memblock pages. + * @page: struct page to free. + * @order: order of @page. + * + * Freed pages are either returned to buddy allocator or held back to be used + * as metadata pages. + */ +bool __init kmsan_memblock_free_pages(struct page *page, unsigned int order); + +/** + * kmsan_task_create() - Initialize KMSAN state for the task. + * @task: task to initialize. + */ +void kmsan_task_create(struct task_struct *task); + +/** + * kmsan_task_exit() - Notify KMSAN that a task has exited. + * @task: task about to finish. + */ +void kmsan_task_exit(struct task_struct *task); + /** * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. * @page: struct page pointer returned by alloc_pages(). @@ -173,6 +207,20 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end); #else +static inline void kmsan_init_shadow(void) +{ +} + +static inline void kmsan_init_runtime(void) +{ +} + +static inline bool kmsan_memblock_free_pages(struct page *page, + unsigned int order) +{ + return true; +} + static inline void kmsan_task_create(struct task_struct *task) { } diff --git a/init/main.c b/init/main.c index 98182c3c2c4b3..5c6937921c890 100644 --- a/init/main.c +++ b/init/main.c @@ -34,6 +34,7 @@ #include #include #include +#include #include #include #include @@ -835,6 +836,7 @@ static void __init mm_init(void) init_mem_debugging_and_hardening(); kfence_alloc_pool(); report_meminit(); + kmsan_init_shadow(); stack_depot_early_init(); mem_init(); mem_init_print_info(); @@ -852,6 +854,7 @@ static void __init mm_init(void) init_espfix_bsp(); /* Should be run after espfix64 is set up. 
*/ pti_init(); + kmsan_init_runtime(); } #ifdef CONFIG_RANDOMIZE_KSTACK_OFFSET diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile index 73b705cbf75b9..f57a956cb1c8b 100644 --- a/mm/kmsan/Makefile +++ b/mm/kmsan/Makefile @@ -1,4 +1,4 @@ -obj-y := core.o instrumentation.o hooks.o report.o shadow.o annotations.o +obj-y := core.o instrumentation.o init.o hooks.o report.o shadow.o annotations.o KMSAN_SANITIZE := n KCOV_INSTRUMENT := n @@ -16,6 +16,7 @@ CFLAGS_REMOVE.o = $(CC_FLAGS_FTRACE) CFLAGS_annotations.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_core.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_hooks.o := $(CC_FLAGS_KMSAN_RUNTIME) +CFLAGS_init.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME) diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c new file mode 100644 index 0000000000000..45757d1390402 --- /dev/null +++ b/mm/kmsan/init.c @@ -0,0 +1,240 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN initialization routines. + * + * Copyright (C) 2017-2021 Google LLC + * Author: Alexander Potapenko + * + */ + +#include "kmsan.h" + +#include +#include +#include + +#include "../internal.h" + +#define NUM_FUTURE_RANGES 128 +struct start_end_pair { + u64 start, end; +}; + +static struct start_end_pair start_end_pairs[NUM_FUTURE_RANGES] __initdata; +static int future_index __initdata; + +/* + * Record a range of memory for which the metadata pages will be created once + * the page allocator becomes available. + */ +static void __init kmsan_record_future_shadow_range(void *start, void *end) +{ + u64 nstart = (u64)start, nend = (u64)end, cstart, cend; + bool merged = false; + int i; + + KMSAN_WARN_ON(future_index == NUM_FUTURE_RANGES); + KMSAN_WARN_ON((nstart >= nend) || !nstart || !nend); + nstart = ALIGN_DOWN(nstart, PAGE_SIZE); + nend = ALIGN(nend, PAGE_SIZE); + + /* + * Scan the existing ranges to see if any of them overlaps with + * [start, end). In that case, merge the two ranges instead of + * creating a new one. + * The number of ranges is less than 20, so there is no need to organize + * them into a more intelligent data structure. + */ + for (i = 0; i < future_index; i++) { + cstart = start_end_pairs[i].start; + cend = start_end_pairs[i].end; + if ((cstart < nstart && cend < nstart) || + (cstart > nend && cend > nend)) + /* ranges are disjoint - do not merge */ + continue; + start_end_pairs[i].start = min(nstart, cstart); + start_end_pairs[i].end = max(nend, cend); + merged = true; + break; + } + if (merged) + return; + start_end_pairs[future_index].start = nstart; + start_end_pairs[future_index].end = nend; + future_index++; +} + +/* + * Initialize the shadow for existing mappings during kernel initialization. + * These include kernel text/data sections, NODE_DATA and future ranges + * registered while creating other data (e.g. percpu). + * + * Allocations via memblock can be only done before slab is initialized. 
+ */ +void __init kmsan_init_shadow(void) +{ + const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE); + phys_addr_t p_start, p_end; + int nid; + u64 i; + + for_each_reserved_mem_range(i, &p_start, &p_end) + kmsan_record_future_shadow_range(phys_to_virt(p_start), + phys_to_virt(p_end)); + /* Allocate shadow for .data */ + kmsan_record_future_shadow_range(_sdata, _edata); + + for_each_online_node(nid) + kmsan_record_future_shadow_range( + NODE_DATA(nid), (char *)NODE_DATA(nid) + nd_size); + + for (i = 0; i < future_index; i++) + kmsan_init_alloc_meta_for_range( + (void *)start_end_pairs[i].start, + (void *)start_end_pairs[i].end); +} +EXPORT_SYMBOL(kmsan_init_shadow); + +struct page_pair { + struct page *shadow, *origin; +}; +static struct page_pair held_back[MAX_ORDER] __initdata; + +/* + * Eager metadata allocation. When the memblock allocator is freeing pages to + * pagealloc, we use 2/3 of them as metadata for the remaining 1/3. + * We store the pointers to the returned blocks of pages in held_back[] grouped + * by their order: when kmsan_memblock_free_pages() is called for the first + * time with a certain order, it is reserved as a shadow block, for the second + * time - as an origin block. On the third time the incoming block receives its + * shadow and origin ranges from the previously saved shadow and origin blocks, + * after which held_back[order] can be used again. + * + * At the very end there may be leftover blocks in held_back[]. They are + * collected later by kmsan_memblock_discard(). + */ +bool kmsan_memblock_free_pages(struct page *page, unsigned int order) +{ + struct page *shadow, *origin; + + if (!held_back[order].shadow) { + held_back[order].shadow = page; + return false; + } + if (!held_back[order].origin) { + held_back[order].origin = page; + return false; + } + shadow = held_back[order].shadow; + origin = held_back[order].origin; + kmsan_setup_meta(page, shadow, origin, order); + + held_back[order].shadow = NULL; + held_back[order].origin = NULL; + return true; +} + +#define MAX_BLOCKS 8 +struct smallstack { + struct page *items[MAX_BLOCKS]; + int index; + int order; +}; + +struct smallstack collect = { + .index = 0, + .order = MAX_ORDER, +}; + +static void smallstack_push(struct smallstack *stack, struct page *pages) +{ + KMSAN_WARN_ON(stack->index == MAX_BLOCKS); + stack->items[stack->index] = pages; + stack->index++; +} +#undef MAX_BLOCKS + +static struct page *smallstack_pop(struct smallstack *stack) +{ + struct page *ret; + + KMSAN_WARN_ON(stack->index == 0); + stack->index--; + ret = stack->items[stack->index]; + stack->items[stack->index] = NULL; + return ret; +} + +static void do_collection(void) +{ + struct page *page, *shadow, *origin; + + while (collect.index >= 3) { + page = smallstack_pop(&collect); + shadow = smallstack_pop(&collect); + origin = smallstack_pop(&collect); + kmsan_setup_meta(page, shadow, origin, collect.order); + __free_pages_core(page, collect.order); + } +} + +static void collect_split(void) +{ + struct smallstack tmp = { + .order = collect.order - 1, + .index = 0, + }; + struct page *page; + + if (!collect.order) + return; + while (collect.index) { + page = smallstack_pop(&collect); + smallstack_push(&tmp, &page[0]); + smallstack_push(&tmp, &page[1 << tmp.order]); + } + __memcpy(&collect, &tmp, sizeof(struct smallstack)); +} + +/* + * Memblock is about to go away. Split the page blocks left over in held_back[] + * and return 1/3 of that memory to the system. 
+ */ +static void kmsan_memblock_discard(void) +{ + int i; + + /* + * For each order=N: + * - push held_back[N].shadow and .origin to |collect|; + * - while there are >= 3 elements in |collect|, do garbage collection: + * - pop 3 ranges from |collect|; + * - use two of them as shadow and origin for the third one; + * - repeat; + * - split each remaining element from |collect| into 2 ranges of + * order=N-1, + * - repeat. + */ + collect.order = MAX_ORDER - 1; + for (i = MAX_ORDER - 1; i >= 0; i--) { + if (held_back[i].shadow) + smallstack_push(&collect, held_back[i].shadow); + if (held_back[i].origin) + smallstack_push(&collect, held_back[i].origin); + held_back[i].shadow = NULL; + held_back[i].origin = NULL; + do_collection(); + collect_split(); + } +} + +void __init kmsan_init_runtime(void) +{ + /* Assuming current is init_task */ + kmsan_internal_task_create(current); + kmsan_memblock_discard(); + pr_info("Starting KernelMemorySanitizer\n"); + pr_info("ATTENTION: KMSAN is a debugging tool! Do not use it on production machines!\n"); + kmsan_enabled = true; +} +EXPORT_SYMBOL(kmsan_init_runtime); diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h index a1b5900ffd97b..059f21c39ec1b 100644 --- a/mm/kmsan/kmsan.h +++ b/mm/kmsan/kmsan.h @@ -66,6 +66,7 @@ struct shadow_origin_ptr { struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size, bool store); void *kmsan_get_metadata(void *addr, bool is_origin); +void __init kmsan_init_alloc_meta_for_range(void *start, void *end); enum kmsan_bug_reason { REASON_ANY, @@ -181,5 +182,7 @@ bool kmsan_internal_is_module_addr(void *vaddr); bool kmsan_internal_is_vmalloc_addr(void *addr); struct page *kmsan_vmalloc_to_page_or_null(void *vaddr); +void kmsan_setup_meta(struct page *page, struct page *shadow, + struct page *origin, int order); #endif /* __MM_KMSAN_KMSAN_H */ diff --git a/mm/kmsan/shadow.c b/mm/kmsan/shadow.c index 8fe6a5ed05e67..99cb9436eddc6 100644 --- a/mm/kmsan/shadow.c +++ b/mm/kmsan/shadow.c @@ -298,3 +298,39 @@ void kmsan_vmap_pages_range_noflush(unsigned long start, unsigned long end, kfree(s_pages); kfree(o_pages); } + +/* Allocate metadata for pages allocated at boot time. 
*/ +void __init kmsan_init_alloc_meta_for_range(void *start, void *end) +{ + struct page *shadow_p, *origin_p; + void *shadow, *origin; + struct page *page; + u64 addr, size; + + start = (void *)ALIGN_DOWN((u64)start, PAGE_SIZE); + size = ALIGN((u64)end - (u64)start, PAGE_SIZE); + shadow = memblock_alloc(size, PAGE_SIZE); + origin = memblock_alloc(size, PAGE_SIZE); + for (addr = 0; addr < size; addr += PAGE_SIZE) { + page = virt_to_page_or_null((char *)start + addr); + shadow_p = virt_to_page_or_null((char *)shadow + addr); + set_no_shadow_origin_page(shadow_p); + shadow_page_for(page) = shadow_p; + origin_p = virt_to_page_or_null((char *)origin + addr); + set_no_shadow_origin_page(origin_p); + origin_page_for(page) = origin_p; + } +} + +void kmsan_setup_meta(struct page *page, struct page *shadow, + struct page *origin, int order) +{ + int i; + + for (i = 0; i < (1 << order); i++) { + set_no_shadow_origin_page(&shadow[i]); + set_no_shadow_origin_page(&origin[i]); + shadow_page_for(&page[i]) = &shadow[i]; + origin_page_for(&page[i]) = &origin[i]; + } +} diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 98393e01e4259..35b1fedb2f09c 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -1716,6 +1716,9 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn, { if (early_page_uninitialised(pfn)) return; + if (!kmsan_memblock_free_pages(page, order)) + /* KMSAN will take care of these pages. */ + return; __free_pages_core(page, order); } From patchwork Tue Apr 26 16:42:49 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827503 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 05A24C433FE for ; Tue, 26 Apr 2022 16:45:15 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 909946B009A; Tue, 26 Apr 2022 12:45:14 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 8B7A96B009B; Tue, 26 Apr 2022 12:45:14 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 75BF86B009C; Tue, 26 Apr 2022 12:45:14 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 66FCC6B009A for ; Tue, 26 Apr 2022 12:45:14 -0400 (EDT) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay12.hostedemail.com (Postfix) with ESMTP id 438C1120B2D for ; Tue, 26 Apr 2022 16:45:14 +0000 (UTC) X-FDA: 79399605348.20.46DFB60 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf18.hostedemail.com (Postfix) with ESMTP id A978B1C0048 for ; Tue, 26 Apr 2022 16:45:09 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id cz24-20020a0564021cb800b00425dfdd7768so3907016edb.2 for ; Tue, 26 Apr 2022 09:45:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=PbBXRuKFLbtHp5AyM4yBRHn0ZnsWI8OdvT0jRFW7dGk=; b=UzHbOhn27qVpptDuuU81LcEBLt7rOLKE1Z4yNRMf0zkmHeofLcAdHL+SqDQLWH82Il 6cto3RZpExd7ZEnNnAe35UkzS2jkAjUX69Q2YDKqz0g6NCSq/WjRaGJHtw43EHu3cb2U 74H178/TCTR18+CzvmKyfFX9b2Z4X7YP6GK3JO9AiYQ2MlSw+zDsaJJUZQ1bJWeQtgsz 
KYwuUT517UeVfpvjEvnbP65ucrioTA1bLWKz7WGDz6RnAmYdszQ4/JzyWv4TAbKiuOGb sOlT+jbPA+6lfsFROSMY/3byYOL6djxZQq1uuRZWZI8XWXlEes7tAvr+I/ZTgHKJS0le Mwmg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=PbBXRuKFLbtHp5AyM4yBRHn0ZnsWI8OdvT0jRFW7dGk=; b=OtMIk4qWyYvyx7VwugvNcVEMIoVznrknBvTfWjdqnEu0GN1efGdJcyvT/CAUnLoc3M QnNoFnb/hi35QX5YDMR8HStXj3qR3TJ7Ka8SRamI+s4RpQyic9xJ9QWiPU4Ged2nqwBh MRKbU8xxqbajWEfHzSBiFOrj/pKCewo61QzjjqFf2FIXAzzOBO4jmL/0wiJIU//gwzXA aSPeLOt70Je3Smqte0n826fv16sbMGC4gV5iyabcWeCq/XakxPZWkupBK6snynXGUOBB BGr+Ftcm0R9gqMzFNRTHXqCyo3MQLSM9PKS0Fhhku6RemwLMPR65nliLFcaBTLJJgOQN NXcw== X-Gm-Message-State: AOAM533UFip0zsVYnh5WIGx+Y3yfnG+PnRE4Eu/BdyNt49rseS6TjwR6 HAzicCkEZIoIi59W3gxiA/EaAN5qdqA= X-Google-Smtp-Source: ABdhPJxJswKGNx1XbX2+/B2Rkoy0KYZvI3ec74susirQuoLEq7dCT4VYCkV8muuT7Bszjk5rbONHKLbz8bs= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:400b:b0:425:f59a:c221 with SMTP id d11-20020a056402400b00b00425f59ac221mr7821838eda.307.1650991512487; Tue, 26 Apr 2022 09:45:12 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:49 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-21-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 20/46] instrumented.h: add KMSAN support From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: A978B1C0048 X-Stat-Signature: ab5fjn5rwo7snqemo6jtiem1iyqfwfqu Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=UzHbOhn2; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf18.hostedemail.com: domain of 3mCFoYgYKCJU5A723G5DD5A3.1DBA7CJM-BB9Kz19.DG5@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3mCFoYgYKCJU5A723G5DD5A3.1DBA7CJM-BB9Kz19.DG5@flex--glider.bounces.google.com X-Rspam-User: X-HE-Tag: 1650991509-808649 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: To avoid false positives, KMSAN needs to unpoison the data copied from the userspace. To detect infoleaks - check the memory buffer passed to copy_to_user(). 
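As a concrete illustration (a hypothetical driver sketch; the struct and function names below are invented and are not part of this series), this is the class of infoleak that checking the copy_to_user() buffer catches: implicit structure padding that is never initialized ends up copied to userspace.

#include <linux/types.h>
#include <linux/uaccess.h>

/*
 * Hypothetical example: struct demo_info and demo_get_info() are invented
 * for illustration only and are not part of this patch.
 */
struct demo_info {
	u32 id;
	u8  flags;
	/* 3 bytes of implicit padding remain uninitialized */
	u64 counter;
};

static long demo_get_info(void __user *arg)
{
	struct demo_info info;	/* no memset(), so the padding stays poisoned */

	info.id = 42;
	info.flags = 1;
	info.counter = 0;

	/*
	 * instrument_copy_to_user() -> kmsan_copy_to_user() checks these
	 * bytes, so KMSAN reports the uninitialized padding leaking out.
	 */
	if (copy_to_user(arg, &info, sizeof(info)))
		return -EFAULT;
	return 0;
}

Conversely, instrument_copy_from_user_after() unpoisons the destination buffer, so data legitimately received from userspace does not later trigger false positive reports.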
Signed-off-by: Alexander Potapenko --- v2: -- move implementation of kmsan_copy_to_user() here Link: https://linux-review.googlesource.com/id/I43e93b9c02709e6be8d222342f1b044ac8bdbaaf --- include/linux/instrumented.h | 5 ++++- include/linux/kmsan-checks.h | 19 ++++++++++++++++++ mm/kmsan/hooks.c | 38 ++++++++++++++++++++++++++++++++++++ 3 files changed, 61 insertions(+), 1 deletion(-) diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h index ee8f7d17d34f5..c73c1b19e9227 100644 --- a/include/linux/instrumented.h +++ b/include/linux/instrumented.h @@ -2,7 +2,7 @@ /* * This header provides generic wrappers for memory access instrumentation that - * the compiler cannot emit for: KASAN, KCSAN. + * the compiler cannot emit for: KASAN, KCSAN, KMSAN. */ #ifndef _LINUX_INSTRUMENTED_H #define _LINUX_INSTRUMENTED_H @@ -10,6 +10,7 @@ #include #include #include +#include #include /** @@ -117,6 +118,7 @@ instrument_copy_to_user(void __user *to, const void *from, unsigned long n) { kasan_check_read(from, n); kcsan_check_read(from, n); + kmsan_copy_to_user(to, from, n, 0); } /** @@ -151,6 +153,7 @@ static __always_inline void instrument_copy_from_user_after(const void *to, const void __user *from, unsigned long n, unsigned long left) { + kmsan_unpoison_memory(to, n - left); } #endif /* _LINUX_INSTRUMENTED_H */ diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h index ecd8336190fc0..aabaf1ba7c251 100644 --- a/include/linux/kmsan-checks.h +++ b/include/linux/kmsan-checks.h @@ -84,6 +84,21 @@ void kmsan_unpoison_memory(const void *address, size_t size); */ void kmsan_check_memory(const void *address, size_t size); +/** + * kmsan_copy_to_user() - Notify KMSAN about a data transfer to userspace. + * @to: destination address in the userspace. + * @from: source address in the kernel. + * @to_copy: number of bytes to copy. + * @left: number of bytes not copied. + * + * If this is a real userspace data transfer, KMSAN checks the bytes that were + * actually copied to ensure there was no information leak. If @to belongs to + * the kernel space (which is possible for compat syscalls), KMSAN just copies + * the metadata. + */ +void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, + size_t left); + #else #define kmsan_init(value) (value) @@ -98,6 +113,10 @@ static inline void kmsan_unpoison_memory(const void *address, size_t size) static inline void kmsan_check_memory(const void *address, size_t size) { } +static inline void kmsan_copy_to_user(void __user *to, const void *from, + size_t to_copy, size_t left) +{ +} #endif diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 43a529569053d..1cdb4420977f1 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -212,6 +212,44 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end) } EXPORT_SYMBOL(kmsan_iounmap_page_range); +void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, + size_t left) +{ + unsigned long ua_flags; + + if (!kmsan_enabled || kmsan_in_runtime()) + return; + /* + * At this point we've copied the memory already. It's hard to check it + * before copying, as the size of actually copied buffer is unknown. + */ + + /* copy_to_user() may copy zero bytes. No need to check. */ + if (!to_copy) + return; + /* Or maybe copy_to_user() failed to copy anything. */ + if (to_copy <= left) + return; + + ua_flags = user_access_save(); + if ((u64)to < TASK_SIZE) { + /* This is a user memory access, check it. 
*/ + kmsan_internal_check_memory((void *)from, to_copy - left, to, + REASON_COPY_TO_USER); + user_access_restore(ua_flags); + return; + } + /* Otherwise this is a kernel memory access. This happens when a compat + * syscall passes an argument allocated on the kernel stack to a real + * syscall. + * Don't check anything, just copy the shadow of the copied bytes. + */ + kmsan_internal_memmove_metadata((void *)to, (void *)from, + to_copy - left); + user_access_restore(ua_flags); +} +EXPORT_SYMBOL(kmsan_copy_to_user); + /* Functions from kmsan-checks.h follow. */ void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { From patchwork Tue Apr 26 16:42:50 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827504 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F1FCAC433EF for ; Tue, 26 Apr 2022 16:45:17 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7F71F6B009B; Tue, 26 Apr 2022 12:45:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 77FC86B009C; Tue, 26 Apr 2022 12:45:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 66EDC8D0001; Tue, 26 Apr 2022 12:45:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 5999E6B009B for ; Tue, 26 Apr 2022 12:45:17 -0400 (EDT) Received: from smtpin15.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 3032020CFB for ; Tue, 26 Apr 2022 16:45:17 +0000 (UTC) X-FDA: 79399605474.15.BA0E6AC Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf09.hostedemail.com (Postfix) with ESMTP id 80CDE140048 for ; Tue, 26 Apr 2022 16:45:13 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id sh14-20020a1709076e8e00b006f3b7adb9ffso1014271ejc.16 for ; Tue, 26 Apr 2022 09:45:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=vFqZ9iNZJAVQIjd7MnY5jhYV4PJ/+WpYoNR2VRSWpiM=; b=eaD4tCJ/YLTDe4ajl2gQ/TiaUeEA4djHXSqlMDmsulc+qbwyzuVd2W9yINHg7fHnga 3Y+nBojMivWb/vlWu7aJjqdQ3YqnLBqv0yN1IxxX322BLlHzj3IM9ELCwxkKAp16qOhv LyADFdDfCUsE+lxz9NxlRdyYnWL3NGagYS8JkeULjygmzHGrjysJLw6McGcbkSgOELQg 1Rc7TqbJxh506aPuLFlQCeuB/YWI3RUvsBYyts3m6+c10BC++lIyRNin1VxoT8iWyPi+ 5eWh+n4wLJVVQzkWe/7aM9SIO+vr3tXqfKqj3v7+4FSCDACZ5wNHzbimfz+M1Qn6e6s2 Njyw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=vFqZ9iNZJAVQIjd7MnY5jhYV4PJ/+WpYoNR2VRSWpiM=; b=YTihtTE5kWadIvhictlFEJa1GAEb0xMeKb4NXFL2AOa6NWrrsrkmtzRBWEptgAMYcF /GZbZGh+EwTSseKUFgSos4k9NJ7KN+SWz5Okrz1ramIksjxvO4jMWqsgyLROgda7jcIK W11RLOKjvmQ7Bgn2elZVKAktv8kZoDnDUU9XuwOFJ84INXfa2IP/+GkOJ27CwdDJlddD lVi8SzcsuOVQXryJ7mS3xgJj+DP8btSiDxgjrsWG62SJ/0fdbORHQqqGptGRnL9c9cBO bNdm/jUw9wq1aVX4ehFpebWusJQZurfZiqMjYZaIvp1O9hdtE8a+uDm9yy2rrtDRRr/X 6byQ== X-Gm-Message-State: AOAM533qdbLTULKec2AizysftM4MfWURbo2evIQ6LzcZQHksli0R4f/X UP9xmlciaiK/yFenni3PMC4q2OInlXY= X-Google-Smtp-Source: 
ABdhPJxSGoEtwF4OOq0yTBsFRqUEi4ydvzEhY/S3s2+/j9D6/negQ9CVvZDUHjNLSrRATTf6/ojzjnCmZdU= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a17:906:3e05:b0:6f3:a14a:fd3f with SMTP id k5-20020a1709063e0500b006f3a14afd3fmr7558438eji.640.1650991515225; Tue, 26 Apr 2022 09:45:15 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:50 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-22-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 21/46] kmsan: unpoison @tlb in arch_tlb_gather_mmu() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspam-User: X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 80CDE140048 X-Stat-Signature: 37jj7pctnagcupu8enxobn6refiyisk1 Authentication-Results: imf09.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b="eaD4tCJ/"; spf=pass (imf09.hostedemail.com: domain of 3myFoYgYKCJg8DA56J8GG8D6.4GEDAFMP-EECN24C.GJ8@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3myFoYgYKCJg8DA56J8GG8D6.4GEDAFMP-EECN24C.GJ8@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1650991513-883791 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is a hack to reduce stackdepot pressure. struct mmu_gather contains 7 1-bit fields packed into a 32-bit unsigned int value. The remaining 25 bits remain uninitialized and are never used, but KMSAN updates the origin for them in zap_pXX_range() in mm/memory.c, thus creating very long origin chains. This is technically correct, but consumes too much memory. Unpoisoning the whole structure will prevent creating such chains. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I76abee411b8323acfdbc29bc3a60dca8cff2de77 --- mm/mmu_gather.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index afb7185ffdc45..2f3821268b311 100644 --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@ -1,6 +1,7 @@ #include #include #include +#include #include #include #include @@ -253,6 +254,15 @@ void tlb_flush_mmu(struct mmu_gather *tlb) static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, bool fullmm) { + /* + * struct mmu_gather contains 7 1-bit fields packed into a 32-bit + * unsigned int value. The remaining 25 bits remain uninitialized + * and are never used, but KMSAN updates the origin for them in + * zap_pXX_range() in mm/memory.c, thus creating very long origin + * chains. This is technically correct, but consumes too much memory. + * Unpoisoning the whole structure will prevent creating such chains. 
+ */ + kmsan_unpoison_memory(tlb, sizeof(*tlb)); tlb->mm = mm; tlb->fullmm = fullmm; From patchwork Tue Apr 26 16:42:51 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827505 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2C591C433EF for ; Tue, 26 Apr 2022 16:45:20 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id BC8B26B009C; Tue, 26 Apr 2022 12:45:19 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id B77696B009D; Tue, 26 Apr 2022 12:45:19 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A19F56B009E; Tue, 26 Apr 2022 12:45:19 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 9293F6B009C for ; Tue, 26 Apr 2022 12:45:19 -0400 (EDT) Received: from smtpin29.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 5E8BD269C1 for ; Tue, 26 Apr 2022 16:45:19 +0000 (UTC) X-FDA: 79399605558.29.5FD5874 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf10.hostedemail.com (Postfix) with ESMTP id 024F8C004C for ; Tue, 26 Apr 2022 16:45:10 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id w8-20020a50d788000000b00418e6810364so10518361edi.13 for ; Tue, 26 Apr 2022 09:45:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=8DiARL40b6+V/ufuvO47fdIAJou+WObIc+AUIejVSTY=; b=EUCnsUCRGBWdtTQAs7pEjigjtVlLNT5aUlwj+4ZqDn1AxMPgRR2DYXAW2EBPSLj5jq LSB/caN3RsCO5hDsl8In3F4gA8EH5xPbxv6LqVGHvoRzyZyXGYgANA68YhHsXXS+vnZ+ OcYOcvT0i08OEQfBjlycTxaFszB8WC9yK9ds1kfWnbajGrhHWXXsFXSdRLABRxG1dZSW Rhw88HYvuatIRRgwKt2yrWqq1vsLUCOLFOAIO8iKRMXZrvUjfGJ3c3X0qRnKvcsyUZMs Xb1mFmXQFpitzvw/5RfLlpk7s39iSkZsekeMIYoYzpQiwCw44UexPBJH9QY30S2Loi24 sKNA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=8DiARL40b6+V/ufuvO47fdIAJou+WObIc+AUIejVSTY=; b=UaIuBq+A+Qes99NbfYdtHghncB7QERltsNQy/39tWAvh85gkq0U1Z0XuOKcLgCmLgo lBI79FsTdYNywrMKPlgpqktsGuxHe67THaaOSZJeYE55IkcH/9ysZ5u059QHQCkFdW7f pqcPFqrV1qxlL+WMBaC2lOEYu5oN8KyMlJ/qo92PGvwMIHUyhdY/xtog8lYvqam9ycMa mpKV+m4NlU7CtIOkdZ4mqT8jUkpGL7m7JukLLQThEWDjHnGECCSDabD2mG2t+caD1zEa LcUdsUNv+MUbpFudScLhOXEYtkbafGtvM2AFXTfk6yRfOHfti69Zj5veRZumrL809mt+ ICmw== X-Gm-Message-State: AOAM5318Q6XtqI9rmvR9bqebfL3XAB6xLMz27QiY09RlRiaLvBUE7GOu jZDodrEcpdmNMtlgpTe/7LiwdeWwqOI= X-Google-Smtp-Source: ABdhPJx8IprxR1s4FocdY4Bbtloh6XPKAEXkKiAjkB9XNvQcFCjh141Gp/J8iNuLq3R7S/RE8xQrIOLnKF8= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:aa7:c31a:0:b0:425:df3c:de8e with SMTP id l26-20020aa7c31a000000b00425df3cde8emr14404475edq.83.1650991517926; Tue, 26 Apr 2022 09:45:17 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:51 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-23-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: 
git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 22/46] kmsan: add iomap support From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: yqibsrjqf863w1586k3egcu6si77g8yq X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 024F8C004C X-Rspam-User: Authentication-Results: imf10.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=EUCnsUCR; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf10.hostedemail.com: domain of 3nSFoYgYKCJoAFC78LAIIAF8.6IGFCHOR-GGEP46E.ILA@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3nSFoYgYKCJoAFC78LAIIAF8.6IGFCHOR-GGEP46E.ILA@flex--glider.bounces.google.com X-HE-Tag: 1650991510-527710 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Functions from lib/iomap.c interact with hardware, so KMSAN must ensure that: - every read function returns an initialized value - every write function checks values before sending them to hardware. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I45527599f09090aca046dfe1a26df453adab100d --- lib/iomap.c | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/lib/iomap.c b/lib/iomap.c index fbaa3e8f19d6c..bdda1a42771b2 100644 --- a/lib/iomap.c +++ b/lib/iomap.c @@ -6,6 +6,7 @@ */ #include #include +#include #include @@ -70,26 +71,31 @@ static void bad_io_access(unsigned long port, const char *access) #define mmio_read64be(addr) swab64(readq(addr)) #endif +__no_sanitize_memory unsigned int ioread8(const void __iomem *addr) { IO_COND(addr, return inb(port), return readb(addr)); return 0xff; } +__no_sanitize_memory unsigned int ioread16(const void __iomem *addr) { IO_COND(addr, return inw(port), return readw(addr)); return 0xffff; } +__no_sanitize_memory unsigned int ioread16be(const void __iomem *addr) { IO_COND(addr, return pio_read16be(port), return mmio_read16be(addr)); return 0xffff; } +__no_sanitize_memory unsigned int ioread32(const void __iomem *addr) { IO_COND(addr, return inl(port), return readl(addr)); return 0xffffffff; } +__no_sanitize_memory unsigned int ioread32be(const void __iomem *addr) { IO_COND(addr, return pio_read32be(port), return mmio_read32be(addr)); @@ -142,18 +148,21 @@ static u64 pio_read64be_hi_lo(unsigned long port) return lo | (hi << 32); } +__no_sanitize_memory u64 ioread64_lo_hi(const void __iomem *addr) { IO_COND(addr, return pio_read64_lo_hi(port), return readq(addr)); return 0xffffffffffffffffULL; } +__no_sanitize_memory u64 ioread64_hi_lo(const void __iomem *addr) { IO_COND(addr, return pio_read64_hi_lo(port), return readq(addr)); return 0xffffffffffffffffULL; } +__no_sanitize_memory u64 ioread64be_lo_hi(const void __iomem *addr) { IO_COND(addr, return pio_read64be_lo_hi(port), @@ -161,6 +170,7 @@ u64 ioread64be_lo_hi(const 
void __iomem *addr) return 0xffffffffffffffffULL; } +__no_sanitize_memory u64 ioread64be_hi_lo(const void __iomem *addr) { IO_COND(addr, return pio_read64be_hi_lo(port), @@ -188,22 +198,32 @@ EXPORT_SYMBOL(ioread64be_hi_lo); void iowrite8(u8 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outb(val,port), writeb(val, addr)); } void iowrite16(u16 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outw(val,port), writew(val, addr)); } void iowrite16be(u16 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write16be(val,port), mmio_write16be(val, addr)); } void iowrite32(u32 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outl(val,port), writel(val, addr)); } void iowrite32be(u32 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write32be(val,port), mmio_write32be(val, addr)); } EXPORT_SYMBOL(iowrite8); @@ -239,24 +259,32 @@ static void pio_write64be_hi_lo(u64 val, unsigned long port) void iowrite64_lo_hi(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64_lo_hi(val, port), writeq(val, addr)); } void iowrite64_hi_lo(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64_hi_lo(val, port), writeq(val, addr)); } void iowrite64be_lo_hi(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64be_lo_hi(val, port), mmio_write64be(val, addr)); } void iowrite64be_hi_lo(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64be_hi_lo(val, port), mmio_write64be(val, addr)); } @@ -328,14 +356,20 @@ static inline void mmio_outsl(void __iomem *addr, const u32 *src, int count) void ioread8_rep(const void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insb(port,dst,count), mmio_insb(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(dst, count); } void ioread16_rep(const void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insw(port,dst,count), mmio_insw(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(dst, count * 2); } void ioread32_rep(const void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insl(port,dst,count), mmio_insl(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_memory(dst, count * 4); } EXPORT_SYMBOL(ioread8_rep); EXPORT_SYMBOL(ioread16_rep); @@ -343,14 +377,20 @@ EXPORT_SYMBOL(ioread32_rep); void iowrite8_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. 
*/ + kmsan_check_memory(src, count); IO_COND(addr, outsb(port, src, count), mmio_outsb(addr, src, count)); } void iowrite16_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(src, count * 2); IO_COND(addr, outsw(port, src, count), mmio_outsw(addr, src, count)); } void iowrite32_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(src, count * 4); IO_COND(addr, outsl(port, src,count), mmio_outsl(addr, src, count)); } EXPORT_SYMBOL(iowrite8_rep); From patchwork Tue Apr 26 16:42:52 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827506 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 29FC2C433EF for ; Tue, 26 Apr 2022 16:45:23 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B2BE26B0073; Tue, 26 Apr 2022 12:45:22 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id ADADA6B009D; Tue, 26 Apr 2022 12:45:22 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 97D0F6B009E; Tue, 26 Apr 2022 12:45:22 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 86FB66B0073 for ; Tue, 26 Apr 2022 12:45:22 -0400 (EDT) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 5CEA7D0B for ; Tue, 26 Apr 2022 16:45:22 +0000 (UTC) X-FDA: 79399605684.30.4CDD0A1 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf18.hostedemail.com (Postfix) with ESMTP id A8AF61C0051 for ; Tue, 26 Apr 2022 16:45:17 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id hs26-20020a1709073e9a00b006f3b957ebb4so837893ejc.7 for ; Tue, 26 Apr 2022 09:45:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=A4BOD5bYYCwqT6UysGwaTqw9NZkJge0TWQm9U6+WrG0=; b=khSFQmZPrfoPgxGEUABi/FVW8V9rw4+gcS76epQACJKqyg7hkcjwrrNZSEPqT8cKN9 ONCpTChsOEZ0awUUpbA5x5xayPRdehOMzeg0fmzK8U1O+otCttBP994AhBNpWeRijLhC uA0BFEVHSShFszO60gnvZ/g8XEEAvlCsHxSsQtTYtMUoAiTqbiLeIdThYaket6Dbkru7 YLDTtMsAFMm7WODYFRUO4LbdZSTFrmTAN4yb5OHT1IcBSYVtVTBthQ8DYmMN9xjdT8Y5 P4oaK9oxiUba69g6lk7NgMT+HmLDsQRoGLtP+vFJRnamxCbKjs/0LNdi6KunA8g68VkO EfjA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=A4BOD5bYYCwqT6UysGwaTqw9NZkJge0TWQm9U6+WrG0=; b=RxDjUqQn3ZU6RJLNsaisNn2SG52IO6/+bYdDwanZbYdWNttQFfNN1B1KBzG50Mx/yR FkWyMANkXC0oje5ZEE80H3YImANegLDxlm0tR8xbcuyYdK/XZlqywihcsPdORCeurAte FQN3eZ5tvh5hhlP7Vg4PVIj9m9d5EM6FSSpS5rrudZWHwP/Kyb0/heTnfT/cNcM2uahu xVmH6iVxpYCimjuEdE7dXbxtcANJWTgO4TRmW5jfVnGL3fn89PBJCyxieBNdLQJYfmN8 fJLSWDpUhWxSJmtWdBQVNyFc3hKSsdc4FbXjd1ozQ+ud+h1ZShXh09R2GEYWIN0ertOW Oysg== X-Gm-Message-State: AOAM533goPI33oZ369lR7UTbAEDwrysT7X9t9l4DFRyLrB4rZ0QqCT8U Kg5gfBjVLDjXKtbPFNWS0O83FpYthS8= X-Google-Smtp-Source: 
ABdhPJxAop9pzi2nkfiGiO4x6mNOPP93/Bjni9kA40EEqg0TirQbaSSuyCXI2m0QfvecUipohdkmvzvyeto= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:254e:b0:424:244:faf with SMTP id l14-20020a056402254e00b0042402440fafmr25661420edb.260.1650991520418; Tue, 26 Apr 2022 09:45:20 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:52 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-24-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 23/46] Input: libps2: mark data received in __ps2_command() as initialized From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=khSFQmZP; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf18.hostedemail.com: domain of 3oCFoYgYKCJ0DIFABODLLDIB.9LJIFKRU-JJHS79H.LOD@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3oCFoYgYKCJ0DIFABODLLDIB.9LJIFKRU-JJHS79H.LOD@flex--glider.bounces.google.com X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: A8AF61C0051 X-Rspam-User: X-Stat-Signature: cf9gggtkerny5xme6ed6igor5u1a4asd X-HE-Tag: 1650991517-17706 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN does not know that the device initializes certain bytes in ps2dev->cmdbuf. Call kmsan_unpoison_memory() to explicitly mark them as initialized. 
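The same idiom applies to any buffer filled behind KMSAN's back, whether by the device itself or by an uninstrumented interrupt path. A minimal sketch of the pattern, with an invented hw_fill_buffer() helper standing in for the hardware side:

#include <linux/kmsan-checks.h>
#include <linux/types.h>

/* Invented stand-in for "the device/IRQ path fills this buffer"; not a real API. */
extern size_t hw_fill_buffer(u8 *buf, size_t len);

static size_t dev_read_reply(u8 *buf, size_t len)
{
	size_t received = hw_fill_buffer(buf, len);

	/*
	 * These bytes were written outside of instrumented code, so KMSAN
	 * still considers them poisoned. Unpoison exactly the bytes the
	 * device produced before the rest of the kernel consumes them.
	 */
	kmsan_unpoison_memory(buf, received);
	return received;
}

In __ps2_command() below, the equivalent call covers the 'receive' bytes copied out of ps2dev->cmdbuf into param.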
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I2d26f6baa45271d37320d3f4a528c39cb7e545f0 --- drivers/input/serio/libps2.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/input/serio/libps2.c b/drivers/input/serio/libps2.c index 250e213cc80c6..3e19344eda93c 100644 --- a/drivers/input/serio/libps2.c +++ b/drivers/input/serio/libps2.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include @@ -294,9 +295,11 @@ int __ps2_command(struct ps2dev *ps2dev, u8 *param, unsigned int command) serio_pause_rx(ps2dev->serio); - if (param) + if (param) { for (i = 0; i < receive; i++) param[i] = ps2dev->cmdbuf[(receive - 1) - i]; + kmsan_unpoison_memory(param, receive); + } if (ps2dev->cmdcnt && (command != PS2_CMD_RESET_BAT || ps2dev->cmdcnt != 1)) { From patchwork Tue Apr 26 16:42:53 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827507 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C5BD4C433EF for ; Tue, 26 Apr 2022 16:45:25 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5ADCE6B009D; Tue, 26 Apr 2022 12:45:25 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 536696B009E; Tue, 26 Apr 2022 12:45:25 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3D7F26B009F; Tue, 26 Apr 2022 12:45:25 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 313F56B009D for ; Tue, 26 Apr 2022 12:45:25 -0400 (EDT) Received: from smtpin22.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 0506AD28 for ; Tue, 26 Apr 2022 16:45:24 +0000 (UTC) X-FDA: 79399605810.22.5FDC0FD Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf19.hostedemail.com (Postfix) with ESMTP id CDC6D1A004A for ; Tue, 26 Apr 2022 16:45:20 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id dk9-20020a0564021d8900b00425a9c3d40cso8110032edb.7 for ; Tue, 26 Apr 2022 09:45:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=ae6KBGyLKJoXcxB2yOctfhYiWJk7lyt59+0aGNQoZFE=; b=kru0e6mwMpQdFsNTXlAfmTpSGH0ko9Z62aiquxpVYOR0JXp5EtPpYUXKeC/mmsN9Uv Mp3CAzX6GOJEixW0dclu6CRFgdkOwz2UImg1qNv1NPxGTD3yEIxR2rgv8ej3AjUD5RT0 1+4QmZncDejfeevrwV59cqIXS/SsyVIM1YxmMil8RxMiL6nOjlfyWrun6JkjBe06L8gX yPe6oH2Oe0cNHqZUNa5jyVwSkEa/2QoaQq9AU6Rl3YYAf/EnVNPhfP0bZRfTEIERvMVW XKOQ0Si/wyaghQgFEr588BSzsT8VRUjOfovii7JYjgVO78nBwq8Bg/+CpEfFdTD5/Zo5 GoAA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=ae6KBGyLKJoXcxB2yOctfhYiWJk7lyt59+0aGNQoZFE=; b=jHlJrZyfe66VzUwMksq7WW5RIC+iwhxPcS0HVXVKgNbF+TzJez1ImkZjgnhbUzZVNl 5ANj5IzzntEgLXxY+cW/vuj2LYFdIv0f0H6dIv9KHLM752DmbeuHSiE7uMJGqofa0tvG XH73u9uqiG+q+hKNJomopOC1VFtLwg3Afci0HdeTRsrTBhZVlnlSNo/y1Fiu1ZmLZry3 ZQltPWvU9I+YVcfwkKcolF1vIfgPKUtiwXabcD+7NMZCvltoJ+25gyMkeNPRqiVnYTAV 
IpM22Qu7RVCuA8tinqirLlbSdGNrT+6lmBk8iGcWDRHSRY9B0gcdR36xyNMKemwPFpvu F3sw== X-Gm-Message-State: AOAM531CHcBTJgVNt+9Xd+6FBFPG+Lu/+xJyO7fh4qo7qa+oZd/lp++m HMVW6fuXFNNa6NpPrUdcR8pE1ota+AI= X-Google-Smtp-Source: ABdhPJwgCnNxvqRMEZQ8KvQ5jOB/J8AlGoWTaRunP9zD2mXvGus8OSk8rSp8TRR9z2UMTYCCAqyWXZmoFZQ= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:274b:b0:423:fe73:95a0 with SMTP id z11-20020a056402274b00b00423fe7395a0mr25532792edd.224.1650991523134; Tue, 26 Apr 2022 09:45:23 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:53 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-25-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 24/46] kmsan: dma: unpoison DMA mappings From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Queue-Id: CDC6D1A004A X-Stat-Signature: giza48qgx9uqzxrzzqcc6ocfwas68aet Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=kru0e6mw; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf19.hostedemail.com: domain of 3oyFoYgYKCKAGLIDERGOOGLE.COMLINUX-MMKVACK.ORG@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3oyFoYgYKCKAGLIDERGOOGLE.COMLINUX-MMKVACK.ORG@flex--glider.bounces.google.com X-Rspam-User: X-Rspamd-Server: rspam08 X-HE-Tag: 1650991520-176548 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN doesn't know about DMA memory writes performed by devices. We unpoison such memory when it's mapped to avoid false positive reports. Signed-off-by: Alexander Potapenko --- v2: -- move implementation of kmsan_handle_dma() and kmsan_handle_dma_sg() here Link: https://linux-review.googlesource.com/id/Ia162dc4c5a92e74d4686c1be32a4dfeffc5c32cd --- include/linux/kmsan.h | 41 +++++++++++++++++++++++++++++ kernel/dma/mapping.c | 9 ++++--- mm/kmsan/hooks.c | 61 +++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 108 insertions(+), 3 deletions(-) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index a5767c728a46b..d8667161a10c8 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -9,6 +9,7 @@ #ifndef _LINUX_KMSAN_H #define _LINUX_KMSAN_H +#include #include #include #include @@ -18,6 +19,7 @@ struct page; struct kmem_cache; struct task_struct; +struct scatterlist; #ifdef CONFIG_KMSAN @@ -205,6 +207,35 @@ void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, */ void kmsan_iounmap_page_range(unsigned long start, unsigned long end); +/** + * kmsan_handle_dma() - Handle a DMA data transfer. + * @page: first page of the buffer. + * @offset: offset of the buffer within the first page. 
+ * @size: buffer size. + * @dir: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffer, if it is copied to device; + * * initializes the buffer, if it is copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. + */ +void kmsan_handle_dma(struct page *page, size_t offset, size_t size, + enum dma_data_direction dir); + +/** + * kmsan_handle_dma_sg() - Handle a DMA transfer using scatterlist. + * @sg: scatterlist holding DMA buffers. + * @nents: number of scatterlist entries. + * @dir: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffers in the scatterlist, if they are copied to device; + * * initializes the buffers, if they are copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. + */ +void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir); + #else static inline void kmsan_init_shadow(void) @@ -287,6 +318,16 @@ static inline void kmsan_iounmap_page_range(unsigned long start, { } +static inline void kmsan_handle_dma(struct page *page, size_t offset, + size_t size, enum dma_data_direction dir) +{ +} + +static inline void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index db7244291b745..5d17d5d62166b 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -156,6 +156,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, addr = dma_direct_map_page(dev, page, offset, size, dir, attrs); else addr = ops->map_page(dev, page, offset, size, dir, attrs); + kmsan_handle_dma(page, offset, size, dir); debug_dma_map_page(dev, page, offset, size, dir, addr, attrs); return addr; @@ -194,11 +195,13 @@ static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, else ents = ops->map_sg(dev, sg, nents, dir, attrs); - if (ents > 0) + if (ents > 0) { + kmsan_handle_dma_sg(sg, nents, dir); debug_dma_map_sg(dev, sg, nents, ents, dir, attrs); - else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM && - ents != -EIO)) + } else if (WARN_ON_ONCE(ents != -EINVAL && ents != -ENOMEM && + ents != -EIO)) { return -EIO; + } return ents; } diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 1cdb4420977f1..8a6947a2a2f22 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -10,9 +10,11 @@ */ #include +#include #include #include #include +#include #include #include @@ -250,6 +252,65 @@ void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, } EXPORT_SYMBOL(kmsan_copy_to_user); +static void kmsan_handle_dma_page(const void *addr, size_t size, + enum dma_data_direction dir) +{ + switch (dir) { + case DMA_BIDIRECTIONAL: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); + kmsan_internal_unpoison_memory((void *)addr, size, + /*checked*/ false); + break; + case DMA_TO_DEVICE: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); + break; + case DMA_FROM_DEVICE: + kmsan_internal_unpoison_memory((void *)addr, size, + /*checked*/ false); + break; + case DMA_NONE: + break; + } +} + +/* Helper function to handle DMA data transfers. 
*/ +void kmsan_handle_dma(struct page *page, size_t offset, size_t size, + enum dma_data_direction dir) +{ + u64 page_offset, to_go, addr; + + if (PageHighMem(page)) + return; + addr = (u64)page_address(page) + offset; + /* + * The kernel may occasionally give us adjacent DMA pages not belonging + * to the same allocation. Process them separately to avoid triggering + * internal KMSAN checks. + */ + while (size > 0) { + page_offset = addr % PAGE_SIZE; + to_go = min(PAGE_SIZE - page_offset, (u64)size); + kmsan_handle_dma_page((void *)addr, to_go, dir); + addr += to_go; + size -= to_go; + } +} +EXPORT_SYMBOL(kmsan_handle_dma); + +void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, + enum dma_data_direction dir) +{ + struct scatterlist *item; + int i; + + for_each_sg(sg, item, nents, i) + kmsan_handle_dma(sg_page(item), item->offset, item->length, + dir); +} +EXPORT_SYMBOL(kmsan_handle_dma_sg); + /* Functions from kmsan-checks.h follow. */ void kmsan_poison_memory(const void *address, size_t size, gfp_t flags) { From patchwork Tue Apr 26 16:42:54 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827508 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3C382C433F5 for ; Tue, 26 Apr 2022 16:45:28 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id C65716B009E; Tue, 26 Apr 2022 12:45:27 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id C157E6B009F; Tue, 26 Apr 2022 12:45:27 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AB6A96B00A0; Tue, 26 Apr 2022 12:45:27 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 9C57C6B009E for ; Tue, 26 Apr 2022 12:45:27 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 6AFA0C93 for ; Tue, 26 Apr 2022 16:45:27 +0000 (UTC) X-FDA: 79399605894.14.82106D6 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf18.hostedemail.com (Postfix) with ESMTP id D7BE71C004F for ; Tue, 26 Apr 2022 16:45:22 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id hr35-20020a1709073fa300b006f3647cd980so5654018ejc.5 for ; Tue, 26 Apr 2022 09:45:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=n12oscJoQy25ZeJEgKUppRaHioaFclURl8PsqxmYCmo=; b=PIyBhpOCM8Ov1/+Aan+YSMRBfpf54pHX8ytL+CV9/1wxPrduFClfDo2MC25BFRGqUr LC5Hr2Ef8nJt4G1Cr30fbnTHYjTJ5Gax8GaKjFXG0iL+r/c29mApiZT+RSwu1Gv2kkOz 34uVLpDsIsKTbG2q42gMn4a9Vr9GOVlaxiLfXa8b/4pWebY6TjT2x9JFCSEXRuOPPg3n b/OGc6pwoUP/SU7+PJ629KnVXCQHZctVc5m6qyPQPfIcXk7XKqS9AYp6qyxiblVFksU/ vePEsS5V3b8GJxiZEtgsjh4z7hZSSCwDY93JM9PnMnCLZhZrlsTx6bFvrt7KV12fLlqe re3Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=n12oscJoQy25ZeJEgKUppRaHioaFclURl8PsqxmYCmo=; b=03UFrlffcmSXyCDAcAVV1z2HDG9r1uexabHQO+il7p1xLxpx+a318l3+qip+rEH81u 
rB73taJPXCRi361/9c3HuLO7fOcjJTeSEP3LjpuG+9R7pu16buH7lrywiikZ4c9BhnQI pK6BDvD5tWGOTjTEZd4W+zTcYZkzXrs5V3SlpOI51zDs1amdpPrvVdYVvE8YwtMEO3TU j73oRdlIC6unRwqtNP5v1LFo69FzfpsgwgKg+hoFZwdNgzHXphdxEbTDBDSpSE7uObEa yWrNtSH5/RdnuJeEd+v+lVbzi6MeHShV+fMWV+pD4USsGmiNo27JRqpTIRaGTGH95iuZ ablg== X-Gm-Message-State: AOAM531ATTH6HiXUDtDTb8C09igF8qy29LFwl/noiH8GtfeSCHu0ep8I jesM4SaERqwjdXYy5i1tLBbuYfu6IO4= X-Google-Smtp-Source: ABdhPJxU8JwPhcVfaOXEOS2U2ep3mvLwmmcZHEz36qAom9R3kEeJAH/9Z6s5Sw7k1Wv/sYn9qCbZoM2w78E= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:5210:b0:423:de77:2a4d with SMTP id s16-20020a056402521000b00423de772a4dmr25186177edd.295.1650991525861; Tue, 26 Apr 2022 09:45:25 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:54 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-26-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 25/46] kmsan: virtio: check/unpoison scatterlist in vring_map_one_sg() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=PIyBhpOC; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf18.hostedemail.com: domain of 3pSFoYgYKCKIINKFGTIQQING.EQONKPWZ-OOMXCEM.QTI@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3pSFoYgYKCKIINKFGTIQQING.EQONKPWZ-OOMXCEM.QTI@flex--glider.bounces.google.com X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: D7BE71C004F X-Rspam-User: X-Stat-Signature: efuntgsfzznzhk8uot5ootd316kujjo4 X-HE-Tag: 1650991522-430343 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: If vring doesn't use the DMA API, KMSAN is unable to tell whether the memory is initialized by hardware. Explicitly call kmsan_handle_dma() from vring_map_one_sg() in this case to prevent false positives. Signed-off-by: Alexander Potapenko Acked-by: Michael S. Tsirkin --- Link: https://linux-review.googlesource.com/id/I211533ecb86a66624e151551f83ddd749536b3af --- drivers/virtio/virtio_ring.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index cfb028ca238eb..faecd9e3d6560 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include @@ -331,8 +332,15 @@ static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist *sg, enum dma_data_direction direction) { - if (!vq->use_dma_api) + if (!vq->use_dma_api) { + /* + * If DMA is not used, KMSAN doesn't know that the scatterlist + * is initialized by the hardware. 
Explicitly check/unpoison it + * depending on the direction. + */ + kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction); return (dma_addr_t)sg_phys(sg); + } /* * We can't use dma_map_sg, because we don't use scatterlists in From patchwork Tue Apr 26 16:42:55 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827509 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id A04A1C433FE for ; Tue, 26 Apr 2022 16:45:30 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 427786B009F; Tue, 26 Apr 2022 12:45:30 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 3B1356B00A0; Tue, 26 Apr 2022 12:45:30 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 251426B00A1; Tue, 26 Apr 2022 12:45:30 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id 179036B009F for ; Tue, 26 Apr 2022 12:45:30 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay01.hostedemail.com (Postfix) with ESMTP id EBA2E60FDA for ; Tue, 26 Apr 2022 16:45:29 +0000 (UTC) X-FDA: 79399605978.14.FB4F040 Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf30.hostedemail.com (Postfix) with ESMTP id CA6B48005C for ; Tue, 26 Apr 2022 16:45:22 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id o8-20020a170906974800b006f3a8be7502so2043967ejy.8 for ; Tue, 26 Apr 2022 09:45:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=kXck7gc8vUdG+uIpTKbHiyCLJU3V9DKcF3fYel7nQCw=; b=HKjl4po9qikfY5w7sANNOWF8wIYoTZhGhPfGpG2oRS5qaieTXeddZbMui9oEk30REF GfljSR8S1QIxGgAiHnxwjgEv6+EF2+8jJ7T7QZK2oSf8eJtoGjdxW0lf0sHwGI/lNoOh KSwqI4UvdZtVfWHWxZE/JKd/uRDNjbPmLMNeGT3iF7SCfBSOExZCHwBizTIx/FZ301Kf goPWSOE0bCzYKGYIAeB4b4w+/WYrsLZPjYvsQGeev90WIWm90jQbop08KYN+ktuukkjW eqLwPo+Y1By7UOL4GBv0clQvJfdJAgsm6ae+0ok9ios/vlSXb1rGqpnveoFRq063DiYQ GxtA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=kXck7gc8vUdG+uIpTKbHiyCLJU3V9DKcF3fYel7nQCw=; b=7+toQ6oHoG89BRqKNwuPF5e7ZE4ok2LRnbLQcRGxztuAg89S6UKp5hv7eSst+n9s6D wvUyDqGioaa2sk/XgkNJFuSEWv2fVndL7tdonP3Id7p3VLOx8/BHkwoJW/SIWtnKjRFy hEP+RcryFSNby04+uDSr+zH1o7C42s0pj6CKzx0uMEXdOWCSMBbJ0tM5wGBO37Dp3Zli Bloh5JRbZyIe/Gm1mGhEkVb74NDGAIN07lhgNi3ImO9JP1F/I+pC7HkuHu9Bcf7HHunz 5/WX4ZNYaZFOIupLSrQNiXfkxi0lZz3LY9mvlO5dl1c8h7njgcYOdkvy7ePaDb8J5RdT tvuw== X-Gm-Message-State: AOAM533YOH/Qcb8TE0mRWudCss6VYzn2uVaNbsR1d8ojP5wyPeKhTn2i wgIryCUm650hUHhwL/nf7Nh481gw0nA= X-Google-Smtp-Source: ABdhPJz2TfzAbBIJqBzng2cj5zaotZzRLvL6/wlz3cGmkr5GPkO9jG7+a46FVlyYEs+vMIVeqbruPJFqxPk= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a17:907:1b15:b0:6d7:13bd:dd62 with SMTP id mp21-20020a1709071b1500b006d713bddd62mr21844408ejc.673.1650991528244; Tue, 26 Apr 2022 09:45:28 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:55 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> 
Message-Id: <20220426164315.625149-27-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 26/46] kmsan: handle memory sent to/from USB From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf30.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=HKjl4po9; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf30.hostedemail.com: domain of 3qCFoYgYKCKULQNIJWLTTLQJ.HTRQNSZc-RRPaFHP.TWL@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3qCFoYgYKCKULQNIJWLTTLQJ.HTRQNSZc-RRPaFHP.TWL@flex--glider.bounces.google.com X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: CA6B48005C X-Rspam-User: X-Stat-Signature: bky5qe5hx3fuy9xdyg4z6spnzo3qjk44 X-HE-Tag: 1650991522-66539 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Depending on the value of is_out kmsan_handle_urb() KMSAN either marks the data copied to the kernel from a USB device as initialized, or checks the data sent to the device for being initialized. Signed-off-by: Alexander Potapenko --- v2: -- move kmsan_handle_urb() implementation to this patch Link: https://linux-review.googlesource.com/id/Ifa67fb72015d4de14c30e971556f99fc8b2ee506 --- drivers/usb/core/urb.c | 2 ++ include/linux/kmsan.h | 15 +++++++++++++++ mm/kmsan/hooks.c | 17 +++++++++++++++++ 3 files changed, 34 insertions(+) diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c index 33d62d7e3929f..1fe3f23205624 100644 --- a/drivers/usb/core/urb.c +++ b/drivers/usb/core/urb.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -426,6 +427,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags) URB_SETUP_MAP_SINGLE | URB_SETUP_MAP_LOCAL | URB_DMA_SG_COMBINED); urb->transfer_flags |= (is_out ? URB_DIR_OUT : URB_DIR_IN); + kmsan_handle_urb(urb, is_out); if (xfertype != USB_ENDPOINT_XFER_CONTROL && dev->state < USB_STATE_CONFIGURED) diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index d8667161a10c8..55f976b721566 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -20,6 +20,7 @@ struct page; struct kmem_cache; struct task_struct; struct scatterlist; +struct urb; #ifdef CONFIG_KMSAN @@ -236,6 +237,16 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size, void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, enum dma_data_direction dir); +/** + * kmsan_handle_urb() - Handle a USB data transfer. + * @urb: struct urb pointer. + * @is_out: data transfer direction (true means output to hardware). + * + * If @is_out is true, KMSAN checks the transfer buffer of @urb. Otherwise, + * KMSAN initializes the transfer buffer. 
+ */ +void kmsan_handle_urb(const struct urb *urb, bool is_out); + #else static inline void kmsan_init_shadow(void) @@ -328,6 +339,10 @@ static inline void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, { } +static inline void kmsan_handle_urb(const struct urb *urb, bool is_out) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 8a6947a2a2f22..9aecbf2825837 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -17,6 +17,7 @@ #include #include #include +#include #include "../internal.h" #include "../slab.h" @@ -252,6 +253,22 @@ void kmsan_copy_to_user(void __user *to, const void *from, size_t to_copy, } EXPORT_SYMBOL(kmsan_copy_to_user); +/* Helper function to check an URB. */ +void kmsan_handle_urb(const struct urb *urb, bool is_out) +{ + if (!urb) + return; + if (is_out) + kmsan_internal_check_memory(urb->transfer_buffer, + urb->transfer_buffer_length, + /*user_addr*/ 0, REASON_SUBMIT_URB); + else + kmsan_internal_unpoison_memory(urb->transfer_buffer, + urb->transfer_buffer_length, + /*checked*/ false); +} +EXPORT_SYMBOL(kmsan_handle_urb); + static void kmsan_handle_dma_page(const void *addr, size_t size, enum dma_data_direction dir) { From patchwork Tue Apr 26 16:42:56 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827510 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5DF1BC433F5 for ; Tue, 26 Apr 2022 16:45:33 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id E39416B00A0; Tue, 26 Apr 2022 12:45:32 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id DC15C6B00A1; Tue, 26 Apr 2022 12:45:32 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C89436B00A2; Tue, 26 Apr 2022 12:45:32 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id BB6596B00A0 for ; Tue, 26 Apr 2022 12:45:32 -0400 (EDT) Received: from smtpin27.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 9C45A20D38 for ; Tue, 26 Apr 2022 16:45:32 +0000 (UTC) X-FDA: 79399606104.27.03445CF Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf29.hostedemail.com (Postfix) with ESMTP id 742C5120055 for ; Tue, 26 Apr 2022 16:45:29 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id k13-20020a50ce4d000000b00425e4447e64so3792054edj.22 for ; Tue, 26 Apr 2022 09:45:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=8+FTymzEXuQwrQ48VSTvyp6MK3t+ENPCVe5WmeEaVIc=; b=fcJ50fxU8vlV4Zt4ow5QLp80TC8c56IdxjhwU67LEgyBqe8j9VHcHGiTXWvty1WMSf 1bPBfjkXUa51JG42+wVa+w3aHgsOYSrC1f9U8SZ2jz031VP/HC88wQw4NfXbVOoGdtcl SS8jFJBwA5MYs/dZR6sjMvgZtOfXiYQhrCsRhuTSkWp5gWJqFxoT0Go1VP32lUvN66NF QMYm/Pv7rIuLPqSnr2MRY4/KkU+A6liqUHe5KRLS7lCFdZ3JcG6G4DdynX7jifdqwnNt rgjoa8UfdHnj1e0EoDPPBI7pDd0IfYihgK9kALl0k/qSo8xllk+StsshEgAnxbfsnzPI nRkQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; 
bh=8+FTymzEXuQwrQ48VSTvyp6MK3t+ENPCVe5WmeEaVIc=; b=oopfl2gBcpAmm5UFilpjY01RxxLrjZMx0GVdNFAlelmg9P1mbZWmB8PX6exe3vRKku B6B1qavDrSrDrrZzwRRSSMSQjIXDKHGFEg6FMbaFOW6heJGkg3rCNZmZ61l1wgn5QGeb Ohb0OlaEDKLF6gG4HJdGmUhi/znftOMd+M6pKz0mAjWFraZmJ3lzFT7qRQ5kSf2UhzKb BjAaN3qgPLfUNMH8wbzu076HvfYlRH6/BnU6B+uFR6LHKBqeeH8NvaB7ygqsI7SKZfVD 2sEA6unBa5Mn8cDzAKl/JNdgD4YzVcp/Y5CVB6ZuJWxGoWriWWvpCeU2mqbL1ju8/g5p UBgw== X-Gm-Message-State: AOAM530BM10EP1SAo6GUD20i029TKEKphUZqV/jBKALXpJJ2p0B4Rrvt vxxr5m+BdXD86WcAhJZGZK+1RNxiumk= X-Google-Smtp-Source: ABdhPJzMNlQ7wgMMMKP2+ozorBxditMXEaVfv9SlYu5XvLbQQ4QNVo7xn2ciiuqzzcwq86sHWkzQRQWNmV4= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a50:eb87:0:b0:425:c3e2:17a9 with SMTP id y7-20020a50eb87000000b00425c3e217a9mr22640245edr.109.1650991530577; Tue, 26 Apr 2022 09:45:30 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:56 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-28-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 27/46] kmsan: instrumentation.h: add instrumentation_begin_with_regs() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=fcJ50fxU; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf29.hostedemail.com: domain of 3qiFoYgYKCKcNSPKLYNVVNSL.JVTSPUbe-TTRcHJR.VYN@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3qiFoYgYKCKcNSPKLYNVVNSL.JVTSPUbe-TTRcHJR.VYN@flex--glider.bounces.google.com X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 742C5120055 X-Rspam-User: X-Stat-Signature: x11ry69ys9mueq1xpiyyp4fxrhuyj9qf X-HE-Tag: 1650991529-86063 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: When calling KMSAN-instrumented functions from non-instrumented functions, function parameters may not be initialized properly, leading to false positive reports. In particular, this happens all the time when calling interrupt handlers from `noinstr` IDT entries. We introduce instrumentation_begin_with_regs(), which calls instrumentation_begin() and notifies KMSAN about the beginning of the potentially instrumented region by calling kmsan_instrumentation_begin(), which: - wipes the current KMSAN state at the beginning of the region, ensuring that the first call of an instrumented function receives initialized parameters (this is a pretty good approximation of having all other instrumented functions receive initialized parameters); - unpoisons the `struct pt_regs` set up by the non-instrumented assembly code. 
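A minimal usage sketch of the new helper (the entry point and callee names
below are hypothetical, chosen only to illustrate the intended calling
pattern; the real conversions of the generic entry code follow in a later
patch of this series):

    /*
     * A noinstr entry point receives pt_regs set up by non-instrumented
     * assembly. Opening the instrumentation section with the new helper
     * lets KMSAN wipe its context state and unpoison @regs before any
     * instrumented code runs, avoiding false "uninit-value" reports on
     * the passed registers.
     */
    noinstr void example_idt_entry(struct pt_regs *regs)
    {
            instrumentation_begin_with_regs(regs);
            handle_example_interrupt(regs); /* KMSAN-instrumented callee */
            instrumentation_end();
    }
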
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I0f5e3372e00bd5fe25ddbf286f7260aae9011858 --- include/linux/instrumentation.h | 6 ++++++ include/linux/kmsan.h | 11 +++++++++++ mm/kmsan/hooks.c | 16 ++++++++++++++++ 3 files changed, 33 insertions(+) diff --git a/include/linux/instrumentation.h b/include/linux/instrumentation.h index 24359b4a96053..3bbce9d556381 100644 --- a/include/linux/instrumentation.h +++ b/include/linux/instrumentation.h @@ -15,6 +15,11 @@ }) #define instrumentation_begin() __instrumentation_begin(__COUNTER__) +#define instrumentation_begin_with_regs(regs) do { \ + __instrumentation_begin(__COUNTER__); \ + kmsan_instrumentation_begin(regs); \ +} while (0) + /* * Because instrumentation_{begin,end}() can nest, objtool validation considers * _begin() a +1 and _end() a -1 and computes a sum over the instructions. @@ -55,6 +60,7 @@ #define instrumentation_end() __instrumentation_end(__COUNTER__) #else # define instrumentation_begin() do { } while(0) +# define instrumentation_begin_with_regs(regs) kmsan_instrumentation_begin(regs) # define instrumentation_end() do { } while(0) #endif diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index 55f976b721566..209a5a2192e22 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -247,6 +247,13 @@ void kmsan_handle_dma_sg(struct scatterlist *sg, int nents, */ void kmsan_handle_urb(const struct urb *urb, bool is_out); +/** + * kmsan_instrumentation_begin() - handle instrumentation_begin(). + * @regs: pointer to struct pt_regs that non-instrumented code passes to + * instrumented code. + */ +void kmsan_instrumentation_begin(struct pt_regs *regs); + #else static inline void kmsan_init_shadow(void) @@ -343,6 +350,10 @@ static inline void kmsan_handle_urb(const struct urb *urb, bool is_out) { } +static inline void kmsan_instrumentation_begin(struct pt_regs *regs) +{ +} + #endif #endif /* _LINUX_KMSAN_H */ diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 9aecbf2825837..c20d105c143c1 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -366,3 +366,19 @@ void kmsan_check_memory(const void *addr, size_t size) REASON_ANY); } EXPORT_SYMBOL(kmsan_check_memory); + +void kmsan_instrumentation_begin(struct pt_regs *regs) +{ + struct kmsan_context_state *state = &kmsan_get_context()->cstate; + + if (state) + __memset(state, 0, sizeof(struct kmsan_context_state)); + if (!kmsan_enabled || !regs) + return; + /* + * @regs may reside in cpu_entry_area, for which KMSAN does not allocate + * metadata. Do not force an error in that case. 
+ */ + kmsan_internal_unpoison_memory(regs, sizeof(*regs), /*checked*/ false); +} +EXPORT_SYMBOL(kmsan_instrumentation_begin); From patchwork Tue Apr 26 16:42:57 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827511 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id D063DC433EF for ; Tue, 26 Apr 2022 16:45:35 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 70BE86B00A1; Tue, 26 Apr 2022 12:45:35 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 6E4246B00A2; Tue, 26 Apr 2022 12:45:35 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5AB596B00A3; Tue, 26 Apr 2022 12:45:35 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.28]) by kanga.kvack.org (Postfix) with ESMTP id 4D1E66B00A1 for ; Tue, 26 Apr 2022 12:45:35 -0400 (EDT) Received: from smtpin24.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 2EC1026C02 for ; Tue, 26 Apr 2022 16:45:35 +0000 (UTC) X-FDA: 79399606230.24.8FE85B3 Received: from mail-ed1-f73.google.com (mail-ed1-f73.google.com [209.85.208.73]) by imf20.hostedemail.com (Postfix) with ESMTP id 9211E1C005F for ; Tue, 26 Apr 2022 16:45:31 +0000 (UTC) Received: by mail-ed1-f73.google.com with SMTP id w8-20020a50d788000000b00418e6810364so10518770edi.13 for ; Tue, 26 Apr 2022 09:45:34 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=7rWXomHmmGtw2qSW/Jql3Xky6Nc8Jl3o+xpl6HNe0X8=; b=q5MasDLtRuk73svjnlxTla0qBFDkgnt0PsXrVkFMN41i4z8Uc7BjsHa4IZlwyI4mOh PSjNtr6oT4aSfGpSW0YXGpJKMOZ3sk1QnOzt4PQcjsP2frb+KqFrSVrmWcDHhZkns/d/ pdEqJGCSd8xeQiS1/OQIYoiHjeol8c3Q+yW+/B+Knn6wOUS02dBp111b/1mBG/LNHKLX o57oj7KW5sVZwCE68ycL0SdOrbqmApIoCkh8dUgH6H+ygVwXbl7Gp7mJ6YMwZNc44jBv kf7eWkFxMSiWJtqo7427dYj43Sgw5gav9YAFNr+RIt6rHBJ0ZfsALEQce0T6+nWFX4OZ E6gw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=7rWXomHmmGtw2qSW/Jql3Xky6Nc8Jl3o+xpl6HNe0X8=; b=7qkN5KQDfMf5+TIP+w1hnnfn2l0dK2Ilt3RSC3AxBliMCuVm66Ew3VQYqIerScDX+b mUE1A0U5VcNmGxujEAt6ZGAeA7k9mzvLa2QpEdVDdn01hnIqkti/Vb4ixQII/sC+czH1 QnpE25T+zbcHW6dts4e4I2M5uvQE1mg9vxqx6ruaBJMVRTEiIdBlx+2FhXzw5mnUguGP BjjOvimZErGnBtJQwFFZwt/ce3lIDrmulRh0ADdilwebO5kIdxT3Iq6dO/ClyzYgFn7e nvSwVcUZskFCA+wym8866DmQthXMK55Er82duYXKxolm1dXFCyrXLb/Rr8tHYDy4qrv5 ImgA== X-Gm-Message-State: AOAM532K5TO5xaKAi2EiETuDREMhr0KhgG9dx+I4xLPm3gWl6wZsql+x uZ8NmfbXFY2eOVWuNb3a5wHDkvvsm6Y= X-Google-Smtp-Source: ABdhPJyHNp1hORgE0TcahA4SesrvDuNa9ln2vyqb/yiKW2MgJrSo3ZS2uZRSG/oHoxHKcWZIpPWb90g9a0g= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:27d1:b0:425:f92f:aac0 with SMTP id c17-20020a05640227d100b00425f92faac0mr5194069ede.409.1650991533278; Tue, 26 Apr 2022 09:45:33 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:57 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-29-glider@google.com> Mime-Version: 1.0 References: 
<20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 28/46] kmsan: entry: handle register passing from uninstrumented code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: 9qpacojq8dtq37io1qjrfky3h7z64iw6 X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 9211E1C005F Authentication-Results: imf20.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=q5MasDLt; spf=pass (imf20.hostedemail.com: domain of 3rSFoYgYKCKoQVSNObQYYQVO.MYWVSXeh-WWUfKMU.YbQ@flex--glider.bounces.google.com designates 209.85.208.73 as permitted sender) smtp.mailfrom=3rSFoYgYKCKoQVSNObQYYQVO.MYWVSXeh-WWUfKMU.YbQ@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-HE-Tag: 1650991531-160422 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Replace instrumentation_begin() with instrumentation_begin_with_regs() to let KMSAN handle the non-instrumented code and unpoison pt_regs passed from the instrumented part. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I7f0a9809b66bd85faae43142971d0095771b7a42 --- kernel/entry/common.c | 22 +++++++++++----------- 1 file changed, 11 insertions(+), 11 deletions(-) diff --git a/kernel/entry/common.c b/kernel/entry/common.c index 93c3b86e781c1..ce2324374882c 100644 --- a/kernel/entry/common.c +++ b/kernel/entry/common.c @@ -23,7 +23,7 @@ static __always_inline void __enter_from_user_mode(struct pt_regs *regs) CT_WARN_ON(ct_state() != CONTEXT_USER); user_exit_irqoff(); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); trace_hardirqs_off_finish(); instrumentation_end(); } @@ -105,7 +105,7 @@ noinstr long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall) __enter_from_user_mode(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); local_irq_enable(); ret = __syscall_enter_from_user_work(regs, syscall); instrumentation_end(); @@ -116,7 +116,7 @@ noinstr long syscall_enter_from_user_mode(struct pt_regs *regs, long syscall) noinstr void syscall_enter_from_user_mode_prepare(struct pt_regs *regs) { __enter_from_user_mode(regs); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); local_irq_enable(); instrumentation_end(); } @@ -290,7 +290,7 @@ void syscall_exit_to_user_mode_work(struct pt_regs *regs) __visible noinstr void syscall_exit_to_user_mode(struct pt_regs *regs) { - instrumentation_begin(); + instrumentation_begin_with_regs(regs); __syscall_exit_to_user_mode_work(regs); instrumentation_end(); __exit_to_user_mode(); @@ -303,7 +303,7 @@ noinstr void irqentry_enter_from_user_mode(struct pt_regs *regs) noinstr void irqentry_exit_to_user_mode(struct pt_regs *regs) { - instrumentation_begin(); + instrumentation_begin_with_regs(regs); 
exit_to_user_mode_prepare(regs); instrumentation_end(); __exit_to_user_mode(); @@ -351,7 +351,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs) */ lockdep_hardirqs_off(CALLER_ADDR0); rcu_irq_enter(); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); trace_hardirqs_off_finish(); instrumentation_end(); @@ -366,7 +366,7 @@ noinstr irqentry_state_t irqentry_enter(struct pt_regs *regs) * in having another one here. */ lockdep_hardirqs_off(CALLER_ADDR0); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); rcu_irq_enter_check_tick(); trace_hardirqs_off_finish(); instrumentation_end(); @@ -413,7 +413,7 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state) * and RCU as the return to user mode path. */ if (state.exit_rcu) { - instrumentation_begin(); + instrumentation_begin_with_regs(regs); /* Tell the tracer that IRET will enable interrupts */ trace_hardirqs_on_prepare(); lockdep_hardirqs_on_prepare(CALLER_ADDR0); @@ -423,7 +423,7 @@ noinstr void irqentry_exit(struct pt_regs *regs, irqentry_state_t state) return; } - instrumentation_begin(); + instrumentation_begin_with_regs(regs); if (IS_ENABLED(CONFIG_PREEMPTION)) irqentry_exit_cond_resched(); @@ -451,7 +451,7 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs) lockdep_hardirq_enter(); rcu_nmi_enter(); - instrumentation_begin(); + instrumentation_begin_with_regs(regs); trace_hardirqs_off_finish(); ftrace_nmi_enter(); instrumentation_end(); @@ -461,7 +461,7 @@ irqentry_state_t noinstr irqentry_nmi_enter(struct pt_regs *regs) void noinstr irqentry_nmi_exit(struct pt_regs *regs, irqentry_state_t irq_state) { - instrumentation_begin(); + instrumentation_begin_with_regs(regs); ftrace_nmi_exit(); if (irq_state.lockdep) { trace_hardirqs_on_prepare(); From patchwork Tue Apr 26 16:42:58 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827512 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id C660BC433EF for ; Tue, 26 Apr 2022 16:45:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 5FC926B00A2; Tue, 26 Apr 2022 12:45:38 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 5AB386B00A3; Tue, 26 Apr 2022 12:45:38 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4244C6B00A4; Tue, 26 Apr 2022 12:45:38 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 3259D6B00A2 for ; Tue, 26 Apr 2022 12:45:38 -0400 (EDT) Received: from smtpin13.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 041D726C0B for ; Tue, 26 Apr 2022 16:45:37 +0000 (UTC) X-FDA: 79399606356.13.478A039 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf05.hostedemail.com (Postfix) with ESMTP id 043F1100056 for ; Tue, 26 Apr 2022 16:45:30 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id b24-20020a50e798000000b0041631767675so10629187edn.23 for ; Tue, 26 Apr 2022 09:45:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; 
h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=16ohZwKZ9t3WAbi0BPPBDE5qeMRuLRROgr0ny4RlYME=; b=hd5fKa3yl1cD60GLBnVYHLFqj30FOh54rE2Oq7JHoLD+E1R0r+8ebWBFi4Ec6/yN6I 2VHfaD9u+6R+5pPvZC2vcp3AmeAUtR5OUzLIsbP+sGZGJheeX+7A9SWP7pQngMMz3qjL QTripPctFa88nzwXMYJ8UC69nodJsg64C33KTS2WMClfT1hp1HxZzTUL/pU2gMmMNlXH xX20l1qJhPIRfQXmmOUzJ/S7aHCFrZqOcjs3VcKTw2C+wh835SRNI9el2psQYkn3f7/G tK6Hzl1skO9u7qLGDR3Qbdn2jaaLVwP2o6+HcdbDhyAw87g3MyScAg+EKj1eX/Ob/MJy aLHg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=16ohZwKZ9t3WAbi0BPPBDE5qeMRuLRROgr0ny4RlYME=; b=Y+nJUWe6jfMp4uBekLwxoVEyQ/Fkcuq/wBnHGgUczQjc2QRx9bcPdwtDIU8XkGJU9X lg8O05Ub9aelwSwriYxyAD0x+0aYMJJzBLcslqz8JrthAhHGTAtcYbizcwhSrNrSyJsg 5kko8CumbQKooz6rNMJl3hynBHtmTyrTdew7UBv6xZt53gpYpcUx3ni1qTFMNxRdAFnA f5B3/BKG/GyOfGBACYAbi3LlvLMMD0X3j+55XM4mAGd/i1nxXNnPa0xz96SXgxyviWA8 5w1bpFvPHGnnqOSk5wJR2xNiN1AY2WnJhzMNSsldTh/3nXw7C1UIkzAY3AJZNJD9IxIo 3EaQ== X-Gm-Message-State: AOAM533to1wl5sytsP5cq0UCb7fWr67/hJrIlXJtPZHFNYdp69b+4Gt1 WdYCkjG1rfQWvVpf93/l66ryOmXpjwU= X-Google-Smtp-Source: ABdhPJwgIvBqqtb4+UwfAnrYuzq842nCIM4NCU7gD9HqbyhroCBAje/ekLduvR5oCdli1Ru0qSmFr0Qjhq0= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:5255:b0:425:e40a:c927 with SMTP id t21-20020a056402525500b00425e40ac927mr12754417edd.308.1650991535747; Tue, 26 Apr 2022 09:45:35 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:58 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-30-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 29/46] kmsan: add tests for KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 043F1100056 X-Stat-Signature: excm4oxisa1pz9t7jz7kqhehyp4r5ghx X-Rspam-User: Authentication-Results: imf05.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=hd5fKa3y; spf=pass (imf05.hostedemail.com: domain of 3ryFoYgYKCKwSXUPQdSaaSXQ.OaYXUZgj-YYWhMOW.adS@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3ryFoYgYKCKwSXUPQdSaaSXQ.OaYXUZgj-YYWhMOW.adS@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1650991530-423832 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The testing module triggers KMSAN warnings in different cases and checks that the errors are properly reported, using console probes to capture the tool's output. 
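For reference, one way to exercise the suite (an illustrative sketch of the
usual KUnit flow, not something this patch mandates): build a KMSAN-enabled
kernel with the new config option, e.g.

    CONFIG_KUNIT=y
    CONFIG_KMSAN=y
    CONFIG_KMSAN_KUNIT_TEST=m

boot it, load the module with `modprobe kmsan_test`, and read the results of
the "kmsan" suite from the kernel log. With CONFIG_KMSAN_KUNIT_TEST=y the
tests are built in and run during boot instead, as described in the Kconfig
help text below.
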
Signed-off-by: Alexander Potapenko --- v2: -- add memcpy tests Link: https://linux-review.googlesource.com/id/I49c3f59014cc37fd13541c80beb0b75a75244650 --- lib/Kconfig.kmsan | 16 ++ mm/kmsan/Makefile | 4 + mm/kmsan/kmsan_test.c | 536 ++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 556 insertions(+) create mode 100644 mm/kmsan/kmsan_test.c diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan index 199f79d031f94..a68fdb5ed5d92 100644 --- a/lib/Kconfig.kmsan +++ b/lib/Kconfig.kmsan @@ -21,3 +21,19 @@ config KMSAN the whole system down. See for more details. + +if KMSAN + +config KMSAN_KUNIT_TEST + tristate "KMSAN integration test suite" if !KUNIT_ALL_TESTS + default KUNIT_ALL_TESTS + depends on TRACEPOINTS && KUNIT + help + Test suite for KMSAN, testing various error detection scenarios, + and checking that reports are correctly output to console. + + Say Y here if you want the test to be built into the kernel and run + during boot; say M if you want the test to build as a module; say N + if you are unsure. + +endif diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile index f57a956cb1c8b..7be6a7e92394f 100644 --- a/mm/kmsan/Makefile +++ b/mm/kmsan/Makefile @@ -20,3 +20,7 @@ CFLAGS_init.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_instrumentation.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_report.o := $(CC_FLAGS_KMSAN_RUNTIME) CFLAGS_shadow.o := $(CC_FLAGS_KMSAN_RUNTIME) + +obj-$(CONFIG_KMSAN_KUNIT_TEST) += kmsan_test.o +KMSAN_SANITIZE_kmsan_test.o := y +CFLAGS_kmsan_test.o += $(call cc-disable-warning, uninitialized) diff --git a/mm/kmsan/kmsan_test.c b/mm/kmsan/kmsan_test.c new file mode 100644 index 0000000000000..44bb2e0f87d81 --- /dev/null +++ b/mm/kmsan/kmsan_test.c @@ -0,0 +1,536 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Test cases for KMSAN. + * For each test case checks the presence (or absence) of generated reports. + * Relies on 'console' tracepoint to capture reports as they appear in the + * kernel log. + * + * Copyright (C) 2021-2022, Google LLC. + * Author: Alexander Potapenko + * + */ + +#include +#include "kmsan.h" + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static DEFINE_PER_CPU(int, per_cpu_var); + +/* Report as observed from console. */ +static struct { + spinlock_t lock; + bool available; + bool ignore; /* Stop console output collection. */ + char header[256]; +} observed = { + .lock = __SPIN_LOCK_UNLOCKED(observed.lock), +}; + +/* Probe for console output: obtains observed lines of interest. */ +static void probe_console(void *ignore, const char *buf, size_t len) +{ + unsigned long flags; + + if (observed.ignore) + return; + spin_lock_irqsave(&observed.lock, flags); + + if (strnstr(buf, "BUG: KMSAN: ", len)) { + /* + * KMSAN report and related to the test. + * + * The provided @buf is not NUL-terminated; copy no more than + * @len bytes and let strscpy() add the missing NUL-terminator. + */ + strscpy(observed.header, buf, + min(len + 1, sizeof(observed.header))); + WRITE_ONCE(observed.available, true); + observed.ignore = true; + } + spin_unlock_irqrestore(&observed.lock, flags); +} + +/* Check if a report related to the test exists. */ +static bool report_available(void) +{ + return READ_ONCE(observed.available); +} + +/* Information we expect in a report. */ +struct expect_report { + const char *error_type; /* Error type. */ + /* + * Kernel symbol from the error header, or NULL if no report is + * expected. + */ + const char *symbol; +}; + +/* Check observed report matches information in @r. 
*/ +static bool report_matches(const struct expect_report *r) +{ + typeof(observed.header) expected_header; + unsigned long flags; + bool ret = false; + const char *end; + char *cur; + + /* Doubled-checked locking. */ + if (!report_available() || !r->symbol) + return (!report_available() && !r->symbol); + + /* Generate expected report contents. */ + + /* Title */ + cur = expected_header; + end = &expected_header[sizeof(expected_header) - 1]; + + cur += scnprintf(cur, end - cur, "BUG: KMSAN: %s", r->error_type); + + scnprintf(cur, end - cur, " in %s", r->symbol); + /* The exact offset won't match, remove it; also strip module name. */ + cur = strchr(expected_header, '+'); + if (cur) + *cur = '\0'; + + spin_lock_irqsave(&observed.lock, flags); + if (!report_available()) + goto out; /* A new report is being captured. */ + + /* Finally match expected output to what we actually observed. */ + ret = strstr(observed.header, expected_header); +out: + spin_unlock_irqrestore(&observed.lock, flags); + + return ret; +} + +/* ===== Test cases ===== */ + +/* Prevent replacing branch with select in LLVM. */ +static noinline void check_true(char *arg) +{ + pr_info("%s is true\n", arg); +} + +static noinline void check_false(char *arg) +{ + pr_info("%s is false\n", arg); +} + +#define USE(x) \ + do { \ + if (x) \ + check_true(#x); \ + else \ + check_false(#x); \ + } while (0) + +#define EXPECTATION_ETYPE_FN(e, reason, fn) \ + struct expect_report e = { \ + .error_type = reason, \ + .symbol = fn, \ + } + +#define EXPECTATION_NO_REPORT(e) EXPECTATION_ETYPE_FN(e, NULL, NULL) +#define EXPECTATION_UNINIT_VALUE_FN(e, fn) \ + EXPECTATION_ETYPE_FN(e, "uninit-value", fn) +#define EXPECTATION_UNINIT_VALUE(e) EXPECTATION_UNINIT_VALUE_FN(e, __func__) +#define EXPECTATION_USE_AFTER_FREE(e) \ + EXPECTATION_ETYPE_FN(e, "use-after-free", __func__) + +/* Test case: ensure that kmalloc() returns uninitialized memory. */ +static void test_uninit_kmalloc(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + int *ptr; + + kunit_info(test, "uninitialized kmalloc test (UMR report)\n"); + ptr = kmalloc(sizeof(int), GFP_KERNEL); + USE(*ptr); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that kmalloc'ed memory becomes initialized after memset(). + */ +static void test_init_kmalloc(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + int *ptr; + + kunit_info(test, "initialized kmalloc test (no reports)\n"); + ptr = kmalloc(sizeof(int), GFP_KERNEL); + memset(ptr, 0, sizeof(int)); + USE(*ptr); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that kzalloc() returns initialized memory. */ +static void test_init_kzalloc(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + int *ptr; + + kunit_info(test, "initialized kzalloc test (no reports)\n"); + ptr = kzalloc(sizeof(int), GFP_KERNEL); + USE(*ptr); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that local variables are uninitialized by default. */ +static void test_uninit_stack_var(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + volatile int cond; + + kunit_info(test, "uninitialized stack variable (UMR report)\n"); + USE(cond); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that local variables with initializers are initialized. 
*/ +static void test_init_stack_var(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + volatile int cond = 1; + + kunit_info(test, "initialized stack variable (no reports)\n"); + USE(cond); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +static noinline void two_param_fn_2(int arg1, int arg2) +{ + USE(arg1); + USE(arg2); +} + +static noinline void one_param_fn(int arg) +{ + two_param_fn_2(arg, arg); + USE(arg); +} + +static noinline void two_param_fn(int arg1, int arg2) +{ + int init = 0; + + one_param_fn(init); + USE(arg1); + USE(arg2); +} + +static void test_params(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "two_param_fn"); + volatile int uninit, init = 1; + + kunit_info(test, + "uninit passed through a function parameter (UMR report)\n"); + two_param_fn(uninit, init); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +static int signed_sum3(int a, int b, int c) +{ + return a + b + c; +} + +/* + * Test case: ensure that uninitialized values are tracked through function + * arguments. + */ +static void test_uninit_multiple_params(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + volatile char b = 3, c; + volatile int a; + + kunit_info(test, "uninitialized local passed to fn (UMR report)\n"); + USE(signed_sum3(a, b, c)); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Helper function to make an array uninitialized. */ +static noinline void do_uninit_local_array(char *array, int start, int stop) +{ + volatile char uninit; + int i; + + for (i = start; i < stop; i++) + array[i] = uninit; +} + +/* + * Test case: ensure kmsan_check_memory() reports an error when checking + * uninitialized memory. + */ +static void test_uninit_kmsan_check_memory(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_uninit_kmsan_check_memory"); + volatile char local_array[8]; + + kunit_info( + test, + "kmsan_check_memory() called on uninit local (UMR report)\n"); + do_uninit_local_array((char *)local_array, 5, 7); + + kmsan_check_memory((char *)local_array, 8); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: check that a virtual memory range created with vmap() from + * initialized pages is still considered as initialized. + */ +static void test_init_kmsan_vmap_vunmap(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + const int npages = 2; + struct page **pages; + void *vbuf; + int i; + + kunit_info(test, "pages initialized via vmap (no reports)\n"); + + pages = kmalloc_array(npages, sizeof(struct page), GFP_KERNEL); + for (i = 0; i < npages; i++) + pages[i] = alloc_page(GFP_KERNEL); + vbuf = vmap(pages, npages, VM_MAP, PAGE_KERNEL); + memset(vbuf, 0xfe, npages * PAGE_SIZE); + for (i = 0; i < npages; i++) + kmsan_check_memory(page_address(pages[i]), PAGE_SIZE); + + if (vbuf) + vunmap(vbuf); + for (i = 0; i < npages; i++) + if (pages[i]) + __free_page(pages[i]); + kfree(pages); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memset() can initialize a buffer allocated via + * vmalloc(). 
+ */ +static void test_init_vmalloc(struct kunit *test) +{ + EXPECTATION_NO_REPORT(expect); + int npages = 8, i; + char *buf; + + kunit_info(test, "vmalloc buffer can be initialized (no reports)\n"); + buf = vmalloc(PAGE_SIZE * npages); + buf[0] = 1; + memset(buf, 0xfe, PAGE_SIZE * npages); + USE(buf[0]); + for (i = 0; i < npages; i++) + kmsan_check_memory(&buf[PAGE_SIZE * i], PAGE_SIZE); + vfree(buf); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* Test case: ensure that use-after-free reporting works. */ +static void test_uaf(struct kunit *test) +{ + EXPECTATION_USE_AFTER_FREE(expect); + volatile int value; + volatile int *var; + + kunit_info(test, "use-after-free in kmalloc-ed buffer (UMR report)\n"); + var = kmalloc(80, GFP_KERNEL); + var[3] = 0xfeedface; + kfree((int *)var); + /* Copy the invalid value before checking it. */ + value = var[3]; + USE(value); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that uninitialized values are propagated through per-CPU + * memory. + */ +static void test_percpu_propagate(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE(expect); + volatile int uninit, check; + + kunit_info(test, + "uninit local stored to per_cpu memory (UMR report)\n"); + + this_cpu_write(per_cpu_var, uninit); + check = this_cpu_read(per_cpu_var); + USE(check); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that passing uninitialized values to printk() leads to an + * error report. + */ +static void test_printk(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "number"); + volatile int uninit; + + kunit_info(test, "uninit local passed to pr_info() (UMR report)\n"); + pr_info("%px contains %d\n", &uninit, uninit); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memcpy() correctly copies uninitialized values between + * aligned `src` and `dst`. + */ +static void test_memcpy_aligned_to_aligned(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_memcpy_aligned_to_aligned"); + volatile int uninit_src; + volatile int dst = 0; + + kunit_info(test, "memcpy()ing aligned uninit src to aligned dst (UMR report)\n"); + memcpy((void *)&dst, (void *)&uninit_src, sizeof(uninit_src)); + kmsan_check_memory((void *)&dst, sizeof(dst)); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memcpy() correctly copies uninitialized values between + * aligned `src` and unaligned `dst`. + * + * Copying aligned 4-byte value to an unaligned one leads to touching two + * aligned 4-byte values. This test case checks that KMSAN correctly reports an + * error on the first of the two values. + */ +static void test_memcpy_aligned_to_unaligned(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_memcpy_aligned_to_unaligned"); + volatile int uninit_src; + volatile char dst[8] = {0}; + + kunit_info(test, "memcpy()ing aligned uninit src to unaligned dst (UMR report)\n"); + memcpy((void *)&dst[1], (void *)&uninit_src, sizeof(uninit_src)); + kmsan_check_memory((void *)dst, 4); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +/* + * Test case: ensure that memcpy() correctly copies uninitialized values between + * aligned `src` and unaligned `dst`. + * + * Copying aligned 4-byte value to an unaligned one leads to touching two + * aligned 4-byte values. This test case checks that KMSAN correctly reports an + * error on the second of the two values. 
+ */ +static void test_memcpy_aligned_to_unaligned2(struct kunit *test) +{ + EXPECTATION_UNINIT_VALUE_FN(expect, "test_memcpy_aligned_to_unaligned2"); + volatile int uninit_src; + volatile char dst[8] = {0}; + + kunit_info(test, "memcpy()ing aligned uninit src to unaligned dst - part 2 (UMR report)\n"); + memcpy((void *)&dst[1], (void *)&uninit_src, sizeof(uninit_src)); + kmsan_check_memory((void *)&dst[4], sizeof(uninit_src)); + KUNIT_EXPECT_TRUE(test, report_matches(&expect)); +} + +static struct kunit_case kmsan_test_cases[] = { + KUNIT_CASE(test_uninit_kmalloc), + KUNIT_CASE(test_init_kmalloc), + KUNIT_CASE(test_init_kzalloc), + KUNIT_CASE(test_uninit_stack_var), + KUNIT_CASE(test_init_stack_var), + KUNIT_CASE(test_params), + KUNIT_CASE(test_uninit_multiple_params), + KUNIT_CASE(test_uninit_kmsan_check_memory), + KUNIT_CASE(test_init_kmsan_vmap_vunmap), + KUNIT_CASE(test_init_vmalloc), + KUNIT_CASE(test_uaf), + KUNIT_CASE(test_percpu_propagate), + KUNIT_CASE(test_printk), + KUNIT_CASE(test_memcpy_aligned_to_aligned), + KUNIT_CASE(test_memcpy_aligned_to_unaligned), + KUNIT_CASE(test_memcpy_aligned_to_unaligned2), + {}, +}; + +/* ===== End test cases ===== */ + +static int test_init(struct kunit *test) +{ + unsigned long flags; + + spin_lock_irqsave(&observed.lock, flags); + observed.header[0] = '\0'; + observed.ignore = false; + observed.available = false; + spin_unlock_irqrestore(&observed.lock, flags); + + return 0; +} + +static void test_exit(struct kunit *test) +{ +} + +static struct kunit_suite kmsan_test_suite = { + .name = "kmsan", + .test_cases = kmsan_test_cases, + .init = test_init, + .exit = test_exit, +}; +static struct kunit_suite *kmsan_test_suites[] = { &kmsan_test_suite, NULL }; + +static void register_tracepoints(struct tracepoint *tp, void *ignore) +{ + check_trace_callback_type_console(probe_console); + if (!strcmp(tp->name, "console")) + WARN_ON(tracepoint_probe_register(tp, probe_console, NULL)); +} + +static void unregister_tracepoints(struct tracepoint *tp, void *ignore) +{ + if (!strcmp(tp->name, "console")) + tracepoint_probe_unregister(tp, probe_console, NULL); +} + +/* + * We only want to do tracepoints setup and teardown once, therefore we have to + * customize the init and exit functions and cannot rely on kunit_test_suite(). + */ +static int __init kmsan_test_init(void) +{ + /* + * Because we want to be able to build the test as a module, we need to + * iterate through all known tracepoints, since the static registration + * won't work here. 
+ */ + for_each_kernel_tracepoint(register_tracepoints, NULL); + return __kunit_test_suites_init(kmsan_test_suites); +} + +static void kmsan_test_exit(void) +{ + __kunit_test_suites_exit(kmsan_test_suites); + for_each_kernel_tracepoint(unregister_tracepoints, NULL); + tracepoint_synchronize_unregister(); +} + +late_initcall_sync(kmsan_test_init); +module_exit(kmsan_test_exit); + +MODULE_LICENSE("GPL v2"); +MODULE_AUTHOR("Alexander Potapenko "); From patchwork Tue Apr 26 16:42:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827513 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B9FF8C4332F for ; Tue, 26 Apr 2022 16:45:40 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 512516B0074; Tue, 26 Apr 2022 12:45:40 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 474D36B00A4; Tue, 26 Apr 2022 12:45:40 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 339FC6B00A5; Tue, 26 Apr 2022 12:45:40 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 20A556B00A3 for ; Tue, 26 Apr 2022 12:45:40 -0400 (EDT) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id 101BA60995 for ; Tue, 26 Apr 2022 16:45:40 +0000 (UTC) X-FDA: 79399606440.30.83CAB1B Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf17.hostedemail.com (Postfix) with ESMTP id A82DD4004A for ; Tue, 26 Apr 2022 16:45:32 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id dk9-20020a0564021d8900b00425a9c3d40cso8110471edb.7 for ; Tue, 26 Apr 2022 09:45:39 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=bWOHkFAZUowQhXbgV/15IRTSfhe/pFIeo0hpg6VoqgU=; b=Y425LRYJ7pqp2qUV4Un5BKaPtlMc56oRipAYvk35bNiZHMqwQj5K1oHUY+3UtaBrm8 6F8LD23RH7puAqmN6VxxN3dzRPCX2o3jtXwChLes9m38UGHVO/PjltBdViFuvOO0/pv+ 9a5CV3vayhrQVrbmrEv1bjS0Nmz7caoEnvxuLCxERX2znPJk4SiPOZ9Qb7HS9LCHG3Lh qNuuAbPiqTn/Kfp6058UKfvV0NLpFCO10Egzm+8gacy2cKWG5HhTdR3sVdEPim8XyvUO oM/6bhYxYTS9e2sL18gV9sczR7Og8Y4lewCULG7qERx2rzD5bbV+C50LUjYRSUrnHoTG GPqg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=bWOHkFAZUowQhXbgV/15IRTSfhe/pFIeo0hpg6VoqgU=; b=HD17Ebh8vePRfUfFISzG4L0U3msXHoimS7PpJ/tgq3bnEkG6X8qbbco+E6Xs4pTWCo p+kzFwF+UKB064YniJZL4llkiNX3spo2I1szCNZ/zeKhHVnVlfhkntPYpFDNMR6/hGDA NXLfkm3OtCEhYeDYNYfyXZY6uTYsOh9yrW6zf/kH5UcoUAD5xSW7KN5fm2xtBcRQWsUX PrEjEbqQ93lUaB+BzAMq2lABHAZg00j89G/8hOA9diBujuxvud9zC59axnUuvZf7MBwL Ap/0rJ8Mh1Z9dBr5/2bLNsMOZOzydk1fm+ouv/q/J1tlCQhnoCQW7n82UIRKxV69xoEf 8yTA== X-Gm-Message-State: AOAM5326P/AxDNVpwHWiLEbEG0tMlViUmUXIbfRNKwHUeDnVkrCX5spR HMZg7NCuq7KqlYjh1tGjBj9Xr8cRn2Y= X-Google-Smtp-Source: ABdhPJy8hRX/dOnDKYiAzHg5kT/4VfAMjL04J1HrqWNmAu8buMPT+aF4N+PIwLqQH/MNoEjw6XwUMlGRDw4= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:aa7:c789:0:b0:413:605d:8d17 
with SMTP id n9-20020aa7c789000000b00413605d8d17mr25370617eds.100.1650991538395; Tue, 26 Apr 2022 09:45:38 -0700 (PDT) Date: Tue, 26 Apr 2022 18:42:59 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-31-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 30/46] kmsan: disable strscpy() optimization under KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam10 X-Rspamd-Queue-Id: A82DD4004A Authentication-Results: imf17.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Y425LRYJ; spf=pass (imf17.hostedemail.com: domain of 3siFoYgYKCK8VaXSTgVddVaT.RdbaXcjm-bbZkPRZ.dgV@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3siFoYgYKCK8VaXSTgVddVaT.RdbaXcjm-bbZkPRZ.dgV@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Stat-Signature: epiwfwrijk7cumz8chxg7zf94yhkk7uc X-HE-Tag: 1650991532-485994 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Disable the efficient 8-byte reading under KMSAN to avoid false positives. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Iffd8336965e88fce915db2e6a9d6524422975f69 --- lib/string.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/lib/string.c b/lib/string.c index 485777c9da832..4ece4c7e7831b 100644 --- a/lib/string.c +++ b/lib/string.c @@ -197,6 +197,14 @@ ssize_t strscpy(char *dest, const char *src, size_t count) max = 0; #endif + /* + * read_word_at_a_time() below may read uninitialized bytes after the + * trailing zero and use them in comparisons. Disable this optimization + * under KMSAN to prevent false positive reports. 
+ */ + if (IS_ENABLED(CONFIG_KMSAN)) + max = 0; + while (max >= sizeof(unsigned long)) { unsigned long c, data; From patchwork Tue Apr 26 16:43:00 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827514 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7C709C433EF for ; Tue, 26 Apr 2022 16:45:43 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 1DFB06B0075; Tue, 26 Apr 2022 12:45:43 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 13DE76B0078; Tue, 26 Apr 2022 12:45:43 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EFA3A6B007B; Tue, 26 Apr 2022 12:45:42 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id E183D6B0075 for ; Tue, 26 Apr 2022 12:45:42 -0400 (EDT) Received: from smtpin02.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id C26FC26B51 for ; Tue, 26 Apr 2022 16:45:42 +0000 (UTC) X-FDA: 79399606524.02.FA28372 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf19.hostedemail.com (Postfix) with ESMTP id 8F1101A0045 for ; Tue, 26 Apr 2022 16:45:38 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id dp12-20020a170906c14c00b006e7e8234ae2so9350823ejc.2 for ; Tue, 26 Apr 2022 09:45:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=38v0n0Mu0Z4gUwOcZ/qVPNhGReXdywq1ed1xS1ZMc/s=; b=CJZchZCi7h9Kr6IqDDTDcDumDHdj5sj0cAnBKwNoFce2+vz2Yl0MST5GWeDoMSKZ5a NVfjA+ZhItY65EpbQfZDQUCrqWGpDgv9nhiufcLfyRUfcjRvhRPn5OWtwZ3RALS3H0KH LhLDzIa1Or/rBw7+3a9BvsHMp18Jf58zzFyw76E+xVBiJy1T4ARK87dTf7j5OKfhdDne Iai9CoJ2Q3R2SWuzH5JHoscZjh4ZBFKysHomae/gt81c6+HE2/qqUTttMsNQx4fHXJkp vLvPqFXYcALtcsBGrJo7cj8kPyJ2pPrXNttP82G8cBzD6ZB4Kfjj/ja2SDjIFA84dxep n0HQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=38v0n0Mu0Z4gUwOcZ/qVPNhGReXdywq1ed1xS1ZMc/s=; b=okhSib15gXtsVNtzx8tK8pJY+ByFQ8B7dkncTiQqUIdeH/Gt5Up3B5L/RDzZHbmqf8 NVGFbqGrz53yfRDVx15aNhFOBBjpb7ZtzAFKowN3lxF8sjlrT5UTqZd6Sjrm4jqf4eQC LtjwpVWFeH2Wk/fTOj+/pjGZHcafICCXLl40ZJRZ68ydeTMJ2rHDZ9XyBx+I9d7Dy254 3aAD276qP5X3Po/Ql21qogFGuObmV42EzhC48lOjM482muaMJCWYOjwFb0wjcfmKL0hJ mFoWRMrYF4YCURtxTzf7IFiA6gQSTumtvUx14C2cVGKJkEy24a2r7c77Ve2vBU/1Wa+W pKYA== X-Gm-Message-State: AOAM533iQ3yqC4SG4WmQ5N0/+ZISukavCcq3d4cCFHjiXXBJxuiIL7GP I/PgYGyt/x9W5lmXCV11W8Jb2Iphngs= X-Google-Smtp-Source: ABdhPJxDQJ1KHzAU3KQwWTLZtud1ICLmCrTQkWtJUuVwEnyYS6TOqN9JDU2m4nyDsEQ80Q+Det4ouE9T2mY= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:4305:b0:423:f73b:4dd8 with SMTP id m5-20020a056402430500b00423f73b4dd8mr25672281edc.218.1650991540848; Tue, 26 Apr 2022 09:45:40 -0700 (PDT) Date: Tue, 26 Apr 2022 18:43:00 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-32-glider@google.com> Mime-Version: 1.0 References: 
<20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 31/46] crypto: kmsan: disable accelerated configs under KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam03 X-Rspamd-Queue-Id: 8F1101A0045 X-Stat-Signature: czscr7o1zf8swyaaccjbwbkgnfqy3o6y Authentication-Results: imf19.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=CJZchZCi; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf19.hostedemail.com: domain of 3tCFoYgYKCLEXcZUViXffXcV.TfdcZelo-ddbmRTb.fiX@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3tCFoYgYKCLEXcZUViXffXcV.TfdcZelo-ddbmRTb.fiX@flex--glider.bounces.google.com X-Rspam-User: X-HE-Tag: 1650991538-684136 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN is unable to understand when initialized values come from assembly. Disable accelerated configs in KMSAN builds to prevent false positive reports. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Idb2334bf3a1b68b31b399709baefaa763038cc50 --- crypto/Kconfig | 30 ++++++++++++++++++++++++++++++ drivers/net/Kconfig | 1 + 2 files changed, 31 insertions(+) diff --git a/crypto/Kconfig b/crypto/Kconfig index 41068811fd0e1..8078dbba8dd2c 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -297,6 +297,7 @@ config CRYPTO_CURVE25519 config CRYPTO_CURVE25519_X86 tristate "x86_64 accelerated Curve25519 scalar multiplication library" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_CURVE25519_GENERIC select CRYPTO_ARCH_HAVE_LIB_CURVE25519 @@ -345,11 +346,13 @@ config CRYPTO_AEGIS128 config CRYPTO_AEGIS128_SIMD bool "Support SIMD acceleration for AEGIS-128" depends on CRYPTO_AEGIS128 && ((ARM || ARM64) && KERNEL_MODE_NEON) + depends on !KMSAN # avoid false positives from assembly default y config CRYPTO_AEGIS128_AESNI_SSE2 tristate "AEGIS-128 AEAD algorithm (x86_64 AESNI+SSE2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_AEAD select CRYPTO_SIMD help @@ -486,6 +489,7 @@ config CRYPTO_NHPOLY1305 config CRYPTO_NHPOLY1305_SSE2 tristate "NHPoly1305 hash function (x86_64 SSE2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_NHPOLY1305 help SSE2 optimized implementation of the hash function used by the @@ -494,6 +498,7 @@ config CRYPTO_NHPOLY1305_SSE2 config CRYPTO_NHPOLY1305_AVX2 tristate "NHPoly1305 hash function (x86_64 AVX2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_NHPOLY1305 help AVX2 optimized implementation of the hash function used by the @@ -607,6 +612,7 @@ config 
CRYPTO_CRC32C config CRYPTO_CRC32C_INTEL tristate "CRC32c INTEL hardware acceleration" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH help In Intel processor with SSE4.2 supported, the processor will @@ -647,6 +653,7 @@ config CRYPTO_CRC32 config CRYPTO_CRC32_PCLMUL tristate "CRC32 PCLMULQDQ hardware acceleration" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH select CRC32 help @@ -712,6 +719,7 @@ config CRYPTO_BLAKE2S config CRYPTO_BLAKE2S_X86 tristate "BLAKE2s digest algorithm (x86 accelerated version)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_BLAKE2S_GENERIC select CRYPTO_ARCH_HAVE_LIB_BLAKE2S @@ -726,6 +734,7 @@ config CRYPTO_CRCT10DIF config CRYPTO_CRCT10DIF_PCLMUL tristate "CRCT10DIF PCLMULQDQ hardware acceleration" depends on X86 && 64BIT && CRC_T10DIF + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH help For x86_64 processors with SSE4.2 and PCLMULQDQ supported, @@ -778,6 +787,7 @@ config CRYPTO_POLY1305 config CRYPTO_POLY1305_X86_64 tristate "Poly1305 authenticator algorithm (x86_64/SSE2/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_POLY1305_GENERIC select CRYPTO_ARCH_HAVE_LIB_POLY1305 help @@ -866,6 +876,7 @@ config CRYPTO_SHA1 config CRYPTO_SHA1_SSSE3 tristate "SHA1 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA1 select CRYPTO_HASH help @@ -877,6 +888,7 @@ config CRYPTO_SHA1_SSSE3 config CRYPTO_SHA256_SSSE3 tristate "SHA256 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA256 select CRYPTO_HASH help @@ -889,6 +901,7 @@ config CRYPTO_SHA256_SSSE3 config CRYPTO_SHA512_SSSE3 tristate "SHA512 digest algorithm (SSSE3/AVX/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA512 select CRYPTO_HASH help @@ -1061,6 +1074,7 @@ config CRYPTO_WP512 config CRYPTO_GHASH_CLMUL_NI_INTEL tristate "GHASH hash function (CLMUL-NI accelerated)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_CRYPTD help This is the x86_64 CLMUL-NI accelerated implementation of @@ -1111,6 +1125,7 @@ config CRYPTO_AES_TI config CRYPTO_AES_NI_INTEL tristate "AES cipher algorithms (AES-NI)" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_AEAD select CRYPTO_LIB_AES select CRYPTO_ALGAPI @@ -1235,6 +1250,7 @@ config CRYPTO_BLOWFISH_COMMON config CRYPTO_BLOWFISH_X86_64 tristate "Blowfish cipher algorithm (x86_64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_BLOWFISH_COMMON imply CRYPTO_CTR @@ -1265,6 +1281,7 @@ config CRYPTO_CAMELLIA config CRYPTO_CAMELLIA_X86_64 tristate "Camellia cipher algorithm (x86_64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER imply CRYPTO_CTR help @@ -1281,6 +1298,7 @@ config CRYPTO_CAMELLIA_X86_64 config CRYPTO_CAMELLIA_AESNI_AVX_X86_64 tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAMELLIA_X86_64 select CRYPTO_SIMD @@ -1299,6 +1317,7 @@ config CRYPTO_CAMELLIA_AESNI_AVX_X86_64 
config CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_CAMELLIA_AESNI_AVX_X86_64 help Camellia cipher algorithm module (x86_64/AES-NI/AVX2). @@ -1344,6 +1363,7 @@ config CRYPTO_CAST5 config CRYPTO_CAST5_AVX_X86_64 tristate "CAST5 (CAST-128) cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAST5 select CRYPTO_CAST_COMMON @@ -1367,6 +1387,7 @@ config CRYPTO_CAST6 config CRYPTO_CAST6_AVX_X86_64 tristate "CAST6 (CAST-256) cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAST6 select CRYPTO_CAST_COMMON @@ -1400,6 +1421,7 @@ config CRYPTO_DES_SPARC64 config CRYPTO_DES3_EDE_X86_64 tristate "Triple DES EDE cipher algorithm (x86-64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_LIB_DES imply CRYPTO_CTR @@ -1457,6 +1479,7 @@ config CRYPTO_CHACHA20 config CRYPTO_CHACHA20_X86_64 tristate "ChaCha stream cipher algorithms (x86_64/SSSE3/AVX2/AVX-512VL)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_LIB_CHACHA_GENERIC select CRYPTO_ARCH_HAVE_LIB_CHACHA @@ -1500,6 +1523,7 @@ config CRYPTO_SERPENT config CRYPTO_SERPENT_SSE2_X86_64 tristate "Serpent cipher algorithm (x86_64/SSE2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SERPENT select CRYPTO_SIMD @@ -1519,6 +1543,7 @@ config CRYPTO_SERPENT_SSE2_X86_64 config CRYPTO_SERPENT_SSE2_586 tristate "Serpent cipher algorithm (i586/SSE2)" depends on X86 && !64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SERPENT select CRYPTO_SIMD @@ -1538,6 +1563,7 @@ config CRYPTO_SERPENT_SSE2_586 config CRYPTO_SERPENT_AVX_X86_64 tristate "Serpent cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SERPENT select CRYPTO_SIMD @@ -1558,6 +1584,7 @@ config CRYPTO_SERPENT_AVX_X86_64 config CRYPTO_SERPENT_AVX2_X86_64 tristate "Serpent cipher algorithm (x86_64/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SERPENT_AVX_X86_64 help Serpent cipher algorithm, by Anderson, Biham & Knudsen. 
@@ -1699,6 +1726,7 @@ config CRYPTO_TWOFISH_586 config CRYPTO_TWOFISH_X86_64 tristate "Twofish cipher algorithm (x86_64)" depends on (X86 || UML_X86) && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_ALGAPI select CRYPTO_TWOFISH_COMMON imply CRYPTO_CTR @@ -1716,6 +1744,7 @@ config CRYPTO_TWOFISH_X86_64 config CRYPTO_TWOFISH_X86_64_3WAY tristate "Twofish cipher algorithm (x86_64, 3-way parallel)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_TWOFISH_COMMON select CRYPTO_TWOFISH_X86_64 @@ -1736,6 +1765,7 @@ config CRYPTO_TWOFISH_X86_64_3WAY config CRYPTO_TWOFISH_AVX_X86_64 tristate "Twofish cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_SIMD select CRYPTO_TWOFISH_COMMON diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig index b2a4f998c180e..fed89b6981759 100644 --- a/drivers/net/Kconfig +++ b/drivers/net/Kconfig @@ -76,6 +76,7 @@ config WIREGUARD tristate "WireGuard secure network tunnel" depends on NET && INET depends on IPV6 || !IPV6 + depends on !KMSAN # KMSAN doesn't support the crypto configs below select NET_UDP_TUNNEL select DST_CACHE select CRYPTO From patchwork Tue Apr 26 16:43:01 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827515 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 23DB6C433F5 for ; Tue, 26 Apr 2022 16:45:46 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id B7DE46B0078; Tue, 26 Apr 2022 12:45:45 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id B28936B007B; Tue, 26 Apr 2022 12:45:45 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A3E596B00A3; Tue, 26 Apr 2022 12:45:45 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.27]) by kanga.kvack.org (Postfix) with ESMTP id 96F106B0078 for ; Tue, 26 Apr 2022 12:45:45 -0400 (EDT) Received: from smtpin14.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id 6CC6BC97 for ; Tue, 26 Apr 2022 16:45:45 +0000 (UTC) X-FDA: 79399606650.14.0E1AD4F Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf04.hostedemail.com (Postfix) with ESMTP id ED4D940048 for ; Tue, 26 Apr 2022 16:45:40 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id dt18-20020a170907729200b006f377ebe5cbso4559709ejc.22 for ; Tue, 26 Apr 2022 09:45:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=KgfdOYR2+eRzwy3N1nFbXjWhZjMyNN/8KPMy2HvUIiA=; b=DiXJaiB8AJMG8+A1MDOZ3VuBShHUCaL1GlKQxcD6mV0R7Av/UajF571MXN7vKcH+xJ B9d0v8KhVCzlQ+z94pkBBtVLTSnlc2yIkyIXP0B+bFCdaqtmHA9pe8/5NYuKcmD5Vy3L fA34cdtJjSamNgh5dsmbUVd7b9zA7hnjKOjdt88/6jqxc/UG3ZHi4m5Nam5rWzqbzqPm cd06DBJ3ACVxlq9XrgpCh+wVKQori1dj0NvbQ7HSZw0NjHsIf9jtgKdLkxo1euBXeZpk ExnDMib7ea+byjuGF4JqXiPnkpgw3Zh2HRRI3cVSjxCjQ97crsBXKlz1IuJlZ96GONPP Cgcw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; 
h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=KgfdOYR2+eRzwy3N1nFbXjWhZjMyNN/8KPMy2HvUIiA=; b=bU9BfaEglgrMdGomJAwN7Uqx0wKUy6QeTBIfBhhNOMGl9TP6E/QYMcyjOTho+zLZ/c d9beI4x+QLXNZMh+qmdPC7IlwSnsiw+RTmnumptMZ+lNxBcbeneDkDhTRKwlWvoIoQaN qGWxJZc5Fi4c2S1AuYhWe1fHq//vkxp3X3L7IpbAt7CjnRFc067FIQcEtZp4uAU3+6sP bCVGVlh3f9F21TFlLfJmAejeg6WYXUDy6+ZgS60NQh3WMCqrmJNGwSy7p+eCUe6nQ04B rk3LAzTquotBvL5Q2haoOU8vf+25RsZpUkaBHubO9xNt4j1BhwW/hPlIJNL7cJh/t+x1 tTtg== X-Gm-Message-State: AOAM532EnEngmlYE6bomPQcMmEhCMNT+RZ6Ho51rqhkyvKtzkkvjNSs+ R/ZZHBkBXhziYPiaLdBMa1RAxhbPsGc= X-Google-Smtp-Source: ABdhPJzCT8fH32RoUVNaiZCyh2VQrj6uF5kLZlCQX+kBGFsZsTKhLxpnKoIswBSqwHZF0t1dzIj5uX6h2dU= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:26c7:b0:423:e5d6:b6c6 with SMTP id x7-20020a05640226c700b00423e5d6b6c6mr25480700edd.61.1650991543391; Tue, 26 Apr 2022 09:45:43 -0700 (PDT) Date: Tue, 26 Apr 2022 18:43:01 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-33-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 32/46] kmsan: disable physical page merging in biovec From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: njjyhkt7xownwikhgjy3brctfnhu98qd Authentication-Results: imf04.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=DiXJaiB8; spf=pass (imf04.hostedemail.com: domain of 3tyFoYgYKCLQafcXYlaiiafY.Wigfchor-ggepUWe.ila@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3tyFoYgYKCLQafcXYlaiiafY.Wigfchor-ggepUWe.ila@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: ED4D940048 X-HE-Tag: 1650991540-899833 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN metadata for adjacent physical pages may not be adjacent, therefore accessing such pages together may lead to metadata corruption. We disable merging pages in biovec to prevent such corruptions. 
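Before the diff below, a note on why physical adjacency of data pages says nothing about adjacency of their KMSAN metadata. This is an illustrative sketch only, not part of the patch: the kmsan_shadow field and the helper name are hypothetical and do not correspond to the real KMSAN API.

#include <linux/mm.h>

/* Hypothetical illustration, not the real KMSAN metadata layout. */
static bool shadow_is_contiguous(struct page *a, struct page *b)
{
	void *shadow_a = page_address(a->kmsan_shadow /* hypothetical field */);
	void *shadow_b = page_address(b->kmsan_shadow /* hypothetical field */);

	/*
	 * Even when page_to_phys(b) == page_to_phys(a) + PAGE_SIZE, the two
	 * shadow pages come from independent allocations and are usually not
	 * adjacent, so treating a and b as one contiguous buffer would let
	 * metadata accesses run past the end of a's shadow page.
	 */
	return shadow_b == shadow_a + PAGE_SIZE;
}

This is why the change below makes biovec_phys_mergeable() refuse to merge whenever CONFIG_KMSAN is enabled.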
Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Iece16041be5ee47904fbc98121b105e5be5fea5c --- block/blk.h | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/block/blk.h b/block/blk.h index 8ccbc6e076369..95815ac559743 100644 --- a/block/blk.h +++ b/block/blk.h @@ -93,6 +93,13 @@ static inline bool biovec_phys_mergeable(struct request_queue *q, phys_addr_t addr1 = page_to_phys(vec1->bv_page) + vec1->bv_offset; phys_addr_t addr2 = page_to_phys(vec2->bv_page) + vec2->bv_offset; + /* + * Merging adjacent physical pages may not work correctly under KMSAN + * if their metadata pages aren't adjacent. Just disable merging. + */ + if (IS_ENABLED(CONFIG_KMSAN)) + return false; + if (addr1 + vec1->bv_len != addr2) return false; if (xen_domain() && !xen_biovec_phys_mergeable(vec1, vec2->bv_page)) From patchwork Tue Apr 26 16:43:02 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827516 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9CF6FC433F5 for ; Tue, 26 Apr 2022 16:45:48 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 2C7D86B007B; Tue, 26 Apr 2022 12:45:48 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 250496B00A3; Tue, 26 Apr 2022 12:45:48 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0A1D26B00A4; Tue, 26 Apr 2022 12:45:48 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id EFF0A6B007B for ; Tue, 26 Apr 2022 12:45:47 -0400 (EDT) Received: from smtpin16.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id C1F6C20D30 for ; Tue, 26 Apr 2022 16:45:47 +0000 (UTC) X-FDA: 79399606734.16.B524D71 Received: from mail-lf1-f74.google.com (mail-lf1-f74.google.com [209.85.167.74]) by imf16.hostedemail.com (Postfix) with ESMTP id 0E3E8180058 for ; Tue, 26 Apr 2022 16:45:43 +0000 (UTC) Received: by mail-lf1-f74.google.com with SMTP id h4-20020a0565123c8400b00471f8c2a09eso4033417lfv.10 for ; Tue, 26 Apr 2022 09:45:47 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=GtiQESdTzDECedHCE4txXpUZH3/tDa3FQ07WXH5OYuQ=; b=UW3AswRaq6F3AUzKmXnr7cPkQ0g2XLtpFenEmpgWjfvp2fOauRXzdotOTeUNyWSyYs 4tmhw8qwzpN9wiAWPxJUMCVxV1rWcYuXS49a+1tCMi0Oc5SG780CX3I+mnAa6/MfcB77 6+gdBv5jHg6AKYvKP7kA5u9scJwMLXdv1i19bLKNouK41RBdBrSMPPJuqKL61o2W1+Ho pFIgSZPybW4jMIEebVa0EuG05Nils5rY1Bzl3+5j5IX62oqraNKPX3IyM6K8LuZYgWcI qaxMdheHooD/KCw39MnxjCE+Vv1BhdJnZkIOIVorHWlt4oOjs2wI7s6rPQyDKu5idJkG PQRw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=GtiQESdTzDECedHCE4txXpUZH3/tDa3FQ07WXH5OYuQ=; b=r1n/qmfF9HsrTbYrp3qP5LQJDo/NrR2FIxftdYZKcPbXs7qMRHiwMTQUACvIazyn5v 9N2yrDH6lniTxwEcSeKaHiwqqq6lej6dvYM5b097V+16IytsS+zafNSFKRYXLrufWos2 tj4tucrLuKVGrG7nRKVoPXfB0ZRRojCKNI7+Wb0Pame6jRDgn6oPflw9RaFl1hkCZ10Z z1EVsZGlABk8KIuTXVTnxOgkEUGbbpbhcHbAyDuT8OscrAZLAeO6abReIoFQh1DhEiPf 
Ud4wvfve2AoM7dzl5X5DRumUAN5ohFLjkqGnt1G61JyRMbjFq+vKOQlqAMbGulJab92i mGZA== X-Gm-Message-State: AOAM532v55cFbQQ4qKYypZXU+0KjleVwyugaz0ot7uETxrDboO9c/hKl d7TM78ImxKmMyiC3OSwXHbMd/BabPRw= X-Google-Smtp-Source: ABdhPJwozf4GfzZ3k/EMYH1LpIZPBrUltfR1bzZ40SvgirQ7Xof252RdKVFQ5zP9UN/h/78LotXJK8Dsdf0= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6512:20c6:b0:471:fdba:1480 with SMTP id u6-20020a05651220c600b00471fdba1480mr10896844lfr.425.1650991546042; Tue, 26 Apr 2022 09:45:46 -0700 (PDT) Date: Tue, 26 Apr 2022 18:43:02 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-34-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 33/46] kmsan: block: skip bio block merging logic for KMSAN From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org, Eric Biggers X-Stat-Signature: 9osdondj4u7jhcb7np4fd6n1a1pmurxb Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=UW3AswRa; spf=pass (imf16.hostedemail.com: domain of 3uiFoYgYKCLcdifabodlldib.Zljifkru-jjhsXZh.lod@flex--glider.bounces.google.com designates 209.85.167.74 as permitted sender) smtp.mailfrom=3uiFoYgYKCLcdifabodlldib.Zljifkru-jjhsXZh.lod@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 0E3E8180058 X-HE-Tag: 1650991543-85135 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN doesn't allow treating adjacent memory pages as such, if they were allocated by different alloc_pages() calls. The block layer however does so: adjacent pages end up being used together. To prevent this, make page_is_mergeable() return false under KMSAN. 
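As a usage-level illustration of the effect, consider the sketch below (not from the patch; it assumes bio points at an already-allocated struct bio with room for two segments, and return values are ignored for brevity):

	struct page *a = alloc_page(GFP_KERNEL);
	struct page *b = alloc_page(GFP_KERNEL);

	bio_add_page(bio, a, PAGE_SIZE, 0);
	/*
	 * Without KMSAN, if b happens to be physically adjacent to a, the
	 * block layer may fold it into the previous bio_vec.  With
	 * CONFIG_KMSAN=y, page_is_mergeable() now returns false whenever the
	 * new range is not within the same page, so b always gets its own
	 * bio_vec and the metadata of a and b is never accessed as one range.
	 */
	bio_add_page(bio, b, PAGE_SIZE, 0);

The change itself follows.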
Suggested-by: Eric Biggers Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ie29cc2464c70032347c32ab2a22e1e7a0b37b905 --- block/bio.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/block/bio.c b/block/bio.c index 4259125e16ab2..db56090c00bae 100644 --- a/block/bio.c +++ b/block/bio.c @@ -836,6 +836,8 @@ static inline bool page_is_mergeable(const struct bio_vec *bv, return false; *same_page = ((vec_end_addr & PAGE_MASK) == page_addr); + if (!*same_page && IS_ENABLED(CONFIG_KMSAN)) + return false; if (*same_page) return true; return (bv->bv_page + bv_end / PAGE_SIZE) == (page + off / PAGE_SIZE); From patchwork Tue Apr 26 16:43:03 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827517 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1221CC433F5 for ; Tue, 26 Apr 2022 16:45:51 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 9CB296B00A3; Tue, 26 Apr 2022 12:45:50 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 9A46D6B00A4; Tue, 26 Apr 2022 12:45:50 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 844B06B00A5; Tue, 26 Apr 2022 12:45:50 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id 74D056B00A3 for ; Tue, 26 Apr 2022 12:45:50 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id 4AAED20CF2 for ; Tue, 26 Apr 2022 16:45:50 +0000 (UTC) X-FDA: 79399606860.10.19C5DF5 Received: from mail-wr1-f74.google.com (mail-wr1-f74.google.com [209.85.221.74]) by imf11.hostedemail.com (Postfix) with ESMTP id 21F4E4004C for ; Tue, 26 Apr 2022 16:45:46 +0000 (UTC) Received: by mail-wr1-f74.google.com with SMTP id s8-20020adf9788000000b0020adb01dc25so2047659wrb.20 for ; Tue, 26 Apr 2022 09:45:49 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=KHDnEJZlYbtguNecmw4Z0aoQP18cjcWXna880BlF2jg=; b=Ras6zX6zLyLhsIDtzjqqN9y4Bp5f6rPNvg/NlhbZeOJcSZ0W0h6WszebH9Jl+tDnIj 1d8SC1JokEP9rj0/Uz/B0C8vgBdM75W2cOctkHAhkLUnzcLWel+aJ8n1Z2d27uToZyPD NXcsYuwWmfXUsoUSNdeFpNkCp8Mdmw2X2PM8Z0Xq3w/pvHZee2tgfSDClqP6w/GG0G6Z Mkg5MkTefNZEIo51rh3WSUC/xRWyYaKD3+5kRfKXRqTWJTGdKD34V/dogH3TIl3qdspb r/9R17wsy7X98UcL39rrXIx8RR/Zx48ZWvJVe4g3RHHZ1p7AGTIwvBiDniEVZixdJ1br ihtw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=KHDnEJZlYbtguNecmw4Z0aoQP18cjcWXna880BlF2jg=; b=FrwKYRSlnXVBRD57HD3b5dOe3hfyKU0LN3L8z5qcWJ84WYCOz35vh69XIiwUvxH6qs sJlv3BCFqstwIIB68RnuE3QpgLfmYwP3s6LnzdwDFqDy290c41tnN+4mVfjkvjgjkpcq ggAxW4Q3RFFr/59tq/IzgMvZsCgswItaXJBo4I0/fDekzVYHuL1BSweDYOD6t6UN2gbp 1gFBIwmkdHYPxQYadjI1CuXAXj+XRrU/SUy7F+5oVtVRN+WbVLXUJUsH+DyVJcGaH3BY SyDToRwrlhbxbGh1cXniJIUHnLqZMoQ2Kqs6vYBMwJiLFcDZm2tI7JqgdhjeF40mnELE AZuw== X-Gm-Message-State: AOAM53144e4tSiO9ELF/xS0vAHctQnLRKEyZ0UVwQRcaZc70/IZuqUF9 h4XT8aiBA1t5ZUT7KFs/pfU021w6JIQ= X-Google-Smtp-Source: 
ABdhPJwEbc2ldIDEpaUEaYnMNUbLZOo12zW9lcP4NsRCoKnifcBnuhUmGJEZqd7cMdPo2nqMrlaeyXx5FlQ= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:adf:e289:0:b0:1e3:14ad:75fe with SMTP id v9-20020adfe289000000b001e314ad75femr18987161wri.685.1650991548482; Tue, 26 Apr 2022 09:45:48 -0700 (PDT) Date: Tue, 26 Apr 2022 18:43:03 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-35-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 34/46] kmsan: kcov: unpoison area->list in kcov_remote_area_put() From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Rspamd-Server: rspam02 X-Rspamd-Queue-Id: 21F4E4004C X-Stat-Signature: fniko8q94bm6b6uo6gg35wwtp4k64p3p X-Rspam-User: Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=Ras6zX6z; spf=pass (imf11.hostedemail.com: domain of 3vCFoYgYKCLkfkhcdqfnnfkd.bnlkhmtw-lljuZbj.nqf@flex--glider.bounces.google.com designates 209.85.221.74 as permitted sender) smtp.mailfrom=3vCFoYgYKCLkfkhcdqfnnfkd.bnlkhmtw-lljuZbj.nqf@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1650991546-609463 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN does not instrument kernel/kcov.c for performance reasons (with CONFIG_KCOV=y virtually every place in the kernel invokes kcov instrumentation). Therefore the tool may miss writes from kcov.c that initialize memory. When CONFIG_DEBUG_LIST is enabled, list pointers from kernel/kcov.c are passed to instrumented helpers in lib/list_debug.c, resulting in false positives. To work around these reports, we unpoison the contents of area->list after initializing it. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/Ie17f2ee47a7af58f5cdf716d585ebf0769348a5a --- kernel/kcov.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/kernel/kcov.c b/kernel/kcov.c index b3732b2105930..9e38209a7e0a9 100644 --- a/kernel/kcov.c +++ b/kernel/kcov.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -152,6 +153,12 @@ static void kcov_remote_area_put(struct kcov_remote_area *area, INIT_LIST_HEAD(&area->list); area->size = size; list_add(&area->list, &kcov_remote_areas); + /* + * KMSAN doesn't instrument this file, so it may not know area->list + * is initialized. Unpoison it explicitly to avoid reports in + * kcov_remote_area_get(). 
+ */ + kmsan_unpoison_memory(&area->list, sizeof(struct list_head)); } static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t) From patchwork Tue Apr 26 16:43:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827518 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 8EB65C433EF for ; Tue, 26 Apr 2022 16:45:53 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 254396B00A4; Tue, 26 Apr 2022 12:45:53 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1B4266B00A5; Tue, 26 Apr 2022 12:45:53 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0A2556B00A6; Tue, 26 Apr 2022 12:45:53 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.a.hostedemail.com [64.99.140.24]) by kanga.kvack.org (Postfix) with ESMTP id F01256B00A4 for ; Tue, 26 Apr 2022 12:45:52 -0400 (EDT) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id CFF756093E for ; Tue, 26 Apr 2022 16:45:52 +0000 (UTC) X-FDA: 79399606944.26.B8384D8 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf23.hostedemail.com (Postfix) with ESMTP id 1E8CE14004F for ; Tue, 26 Apr 2022 16:45:46 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id l14-20020aa7cace000000b003f7f8e1cbbdso10587957edt.20 for ; Tue, 26 Apr 2022 09:45:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=qfycs5WHoOJISCgVkz4aOXJI06Vi4G56am9I/vqcS5Q=; b=fRwDF91Cl56NtYnfVRrrwWq+ACk34XnpCPgWDJNhxkPf0+jyG4t7SUqyZkxziScoD6 nlS+/T0kxR6PQ7wQlPhgkRdK+OAhXkwXjvWT5GdxHtziDcUCsqgfbLBiAQg7R4lxOA+S b1h3WJe9vuCUveQdnRjvIleLSdoXjd5T81AR3B/d6sysyhegK2IbBH7cJDKBoPbngE01 jvw+rllOIeEdhzy+uxIADb7nx6kl7sAhRpV9EXPjMlnyR573JuNml4rWiL8PIe9a6qEW epw0ifH48szv7cq73qPvPsJ2sXpbRFaPAhln/IK2M8zSSWNmDvdbyCHkb2G+fh3x7wwt zbNQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=qfycs5WHoOJISCgVkz4aOXJI06Vi4G56am9I/vqcS5Q=; b=J+R5axg6TKask7debm931aDfFaaz4VF4naNn42Vz3ZdzZO1ifDynkxTJefNzW2BjKa Zq6KauNqmTQhQ77Fq937Ft6Ioss2msoSdyXXJb+sAT3b0TE4AO3Pgji+ty/uT9YQF6WW 7LAUVEJWjS9/w7eaQCIknZm68tCOm/6MN9eBuj8l2RAJUxKDSLkShXzGC9X6URM20may FlauL7eFeJtspJWWAhQMHmtYCX393oYq04wVLmJPZhNzEKkClmezenoSNNgIjViQJ5qo JxtFfNawXSpfo5sgRHft98+v7hXeg0CX774WuVbnflRbL7/a3PJeQNLVstmdI0q2T3gl U5Bw== X-Gm-Message-State: AOAM532JXRVzz7NyWjCgfDlnbmmieigQA9IpHuLQ8ZT4EsawXDRHnWzl aO5UoEL5ziEpivihj2Xk0NVwzKnltm4= X-Google-Smtp-Source: ABdhPJyO5IzITq1Md/msOkFe5QpCMXnvVTw7Eg5gyXSwTxjpxMvbx88JMdeRNdS+uJ66Z5Deghmk2VEGLtk= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a17:906:a08b:b0:6b9:2e20:f139 with SMTP id q11-20020a170906a08b00b006b92e20f139mr23252089ejy.463.1650991550999; Tue, 26 Apr 2022 09:45:50 -0700 (PDT) Date: Tue, 26 Apr 2022 18:43:04 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-36-glider@google.com> Mime-Version: 
1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 35/46] security: kmsan: fix interoperability with auto-initialization From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: uozsxqdutnd3j6a3y5cs6nnuqd3ekfxe X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 1E8CE14004F X-Rspam-User: Authentication-Results: imf23.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=fRwDF91C; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf23.hostedemail.com: domain of 3viFoYgYKCLshmjefshpphmf.dpnmjovy-nnlwbdl.psh@flex--glider.bounces.google.com designates 209.85.208.74 as permitted sender) smtp.mailfrom=3viFoYgYKCLshmjefshpphmf.dpnmjovy-nnlwbdl.psh@flex--glider.bounces.google.com X-HE-Tag: 1650991546-167318 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Heap and stack initialization is great, but not when we are trying to catch uses of uninitialized memory. When the kernel is built with KMSAN, having kernel memory initialization enabled may introduce false negatives. We disable CONFIG_INIT_STACK_ALL_PATTERN and CONFIG_INIT_STACK_ALL_ZERO under CONFIG_KMSAN, making it impossible to auto-initialize stack variables in KMSAN builds. We also disable CONFIG_INIT_ON_ALLOC_DEFAULT_ON and CONFIG_INIT_ON_FREE_DEFAULT_ON to prevent accidental use of heap auto-initialization. However, we still let users enable heap auto-initialization at boot time (by setting init_on_alloc=1 or init_on_free=1), in which case a warning is printed. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I86608dd867018683a14ae1870f1928ad925f42e9 --- mm/page_alloc.c | 4 ++++ security/Kconfig.hardening | 4 ++++ 2 files changed, 8 insertions(+) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 35b1fedb2f09c..4c89729cac7ac 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -849,6 +849,10 @@ void init_mem_debugging_and_hardening(void) else static_branch_disable(&init_on_free); + if (IS_ENABLED(CONFIG_KMSAN) && + (_init_on_alloc_enabled_early || _init_on_free_enabled_early)) + pr_info("mem auto-init: please make sure init_on_alloc and init_on_free are disabled when running KMSAN\n"); + #ifdef CONFIG_DEBUG_PAGEALLOC if (!debug_pagealloc_enabled()) return; diff --git a/security/Kconfig.hardening b/security/Kconfig.hardening index ded4d7c0d1322..d6cce64899d13 100644 --- a/security/Kconfig.hardening +++ b/security/Kconfig.hardening @@ -106,6 +106,7 @@ choice config INIT_STACK_ALL_PATTERN bool "pattern-init everything (strongest)" depends on CC_HAS_AUTO_VAR_INIT_PATTERN + depends on !KMSAN help Initializes everything on the stack (including padding) with a specific debug value.
This is intended to eliminate @@ -124,6 +125,7 @@ choice config INIT_STACK_ALL_ZERO bool "zero-init everything (strongest and safest)" depends on CC_HAS_AUTO_VAR_INIT_ZERO + depends on !KMSAN help Initializes everything on the stack (including padding) with a zero value. This is intended to eliminate all @@ -218,6 +220,7 @@ config STACKLEAK_RUNTIME_DISABLE config INIT_ON_ALLOC_DEFAULT_ON bool "Enable heap memory zeroing on allocation by default" + depends on !KMSAN help This has the effect of setting "init_on_alloc=1" on the kernel command line. This can be disabled with "init_on_alloc=0". @@ -230,6 +233,7 @@ config INIT_ON_ALLOC_DEFAULT_ON config INIT_ON_FREE_DEFAULT_ON bool "Enable heap memory zeroing on free by default" + depends on !KMSAN help This has the effect of setting "init_on_free=1" on the kernel command line. This can be disabled with "init_on_free=0". From patchwork Tue Apr 26 16:43:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827519 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id EB3CEC433F5 for ; Tue, 26 Apr 2022 16:45:55 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 81DD46B00A5; Tue, 26 Apr 2022 12:45:55 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 7CFDC6B00A6; Tue, 26 Apr 2022 12:45:55 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 620FC6B00A7; Tue, 26 Apr 2022 12:45:55 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 55CC86B00A5 for ; Tue, 26 Apr 2022 12:45:55 -0400 (EDT) Received: from smtpin20.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 36B1626C7E for ; Tue, 26 Apr 2022 16:45:55 +0000 (UTC) X-FDA: 79399607070.20.F7F9338 Received: from mail-ej1-f74.google.com (mail-ej1-f74.google.com [209.85.218.74]) by imf31.hostedemail.com (Postfix) with ESMTP id E658D2004B for ; Tue, 26 Apr 2022 16:45:46 +0000 (UTC) Received: by mail-ej1-f74.google.com with SMTP id qw33-20020a1709066a2100b006f001832229so9339992ejc.4 for ; Tue, 26 Apr 2022 09:45:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=UzBVkSVOyNRYl4KHuto6hQklAwc82x2pMcEIZC5XiS8=; b=owv8rcv7uy/IjMK2oDHnzdeRikUPc32v+DR6vPabqmMBgOdq0UhvmmpjftMy8ytPNt HM3P5/DQ+pT5uKlWjCFQLCWuI9NAXILUzfH3+gwTr92gjb0OskwwBjPetTXSTIPD/6OL 3KUV5a7F0Rw16/K17cTVV6HnNmzJ94eEPwwrT81RmiU5XvUiR8m/dPbUUMv64bYcnrfS 8JLEMyNSW8k05BHKUBKubnloTQr7UOc9bvvDZfi1MjSOdb0FNIpfyPVgIpYfdedOBZYB K6n0Di2ypZYYijr3xtes9AkKQrrbDkPvtqX7ow2f4ddywq7YCN1BIaM7a/np99bjjIjy k1tg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=UzBVkSVOyNRYl4KHuto6hQklAwc82x2pMcEIZC5XiS8=; b=3AZnjdjrW53n5hVmO1IDfeSt0abz9nZ95BogNH4SeWYg+i4CRYk53ssIeqbo2P0bFo Uje8owkuAgT9qQc2MC/HD1f1uzaPgCwVd7prWmGpDA+23/9lU71MBz3LGDQOy6xPImq2 wqqSJ3f1VuB3LA/t0sJocwb869m9qNuNJxhno9CuJzA6Y+sj7LS+8UQFnivL+DgxyLsw wJUSMQgXyLejsqc3zZHNCR+WXxxETlZuiF6Sj0fMNbaaToEJ9KKj5Vn0Wreso3SvBrPu 
fxcsGJHbjsmFqkrfx1tgvIlsvJylOoQEiRIZzotcaDGiwrp4adOQxKFC+z+hkVMtiPiV fFMA== X-Gm-Message-State: AOAM531vRdR4WhDObYg10s98CiGGbz3SKA6nNKYQHCNRm4BkpFItyvK2 FGtRKmoiPBFKtWu8DqMHW9/BX0Ul5i4= X-Google-Smtp-Source: ABdhPJyuq9+TYZNExRenypS/KX++WGny+sO+6oBDx1c907ZTGsidHormgyYiaYHSTUT0xaavjbt6mR9vHwU= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:54:b0:419:9b58:e305 with SMTP id f20-20020a056402005400b004199b58e305mr25365353edu.158.1650991553606; Tue, 26 Apr 2022 09:45:53 -0700 (PDT) Date: Tue, 26 Apr 2022 18:43:05 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-37-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 36/46] objtool: kmsan: list KMSAN API functions as uaccess-safe From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: ak9pk686zimhpspb4gcqynyerju9apco X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: E658D2004B Authentication-Results: imf31.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=owv8rcv7; spf=pass (imf31.hostedemail.com: domain of 3wSFoYgYKCL4kpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com designates 209.85.218.74 as permitted sender) smtp.mailfrom=3wSFoYgYKCL4kpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-HE-Tag: 1650991546-635029 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN inserts API function calls in a lot of places (function entries and exits, local variables, memory accesses), so they may get called from the uaccess regions as well. KMSAN API functions are used to update the metadata (shadow/origin pages) for kernel memory accesses. The metadata pages for kernel pointers are also located in the kernel memory, so touching them is not a problem. For userspace pointers, no metadata is allocated. If an API function is supposed to read or modify the metadata, it does so for kernel pointers and ignores userspace pointers. If an API function is supposed to return a pair of metadata pointers for the instrumentation to use (like all __msan_metadata_ptr_for_TYPE_SIZE() functions do), it returns the allocated metadata for kernel pointers and special dummy buffers residing in the kernel memory for userspace pointers. As a result, none of KMSAN API functions perform userspace accesses, but since they might be called from UACCESS regions they use user_access_save/restore(). 
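To see why these entries are needed, consider what instrumented code inside a user_access_begin()/user_access_end() region roughly looks like. The function below is an illustrative sketch, not taken from this series; the KMSAN calls named in the comment are inserted by the compiler, not written by hand.

#include <linux/uaccess.h>

/* Sketch only: a plain put_user()-style helper built from unsafe accessors. */
static int example_put_user(u32 __user *uptr, u32 kval)
{
	if (!user_access_begin(uptr, sizeof(*uptr)))
		return -EFAULT;
	/*
	 * Under KMSAN the compiler emits calls such as
	 * __msan_metadata_ptr_for_load_4() and __msan_warning() around the
	 * access below.  objtool normally rejects function calls made while
	 * AC is set, so these runtime entry points must be listed in
	 * uaccess_safe_builtin[]; as explained above, they handle the AC
	 * flag themselves via user_access_save()/user_access_restore().
	 */
	unsafe_put_user(kval, uptr, efault);
	user_access_end();
	return 0;
efault:
	user_access_end();
	return -EFAULT;
}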
Signed-off-by: Alexander Potapenko --- v3: -- updated the patch description Link: https://linux-review.googlesource.com/id/I242bc9816273fecad4ea3d977393784396bb3c35 --- tools/objtool/check.c | 19 +++++++++++++++++++ 1 file changed, 19 insertions(+) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index bd0c2c828940a..44825a96adc7c 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1008,6 +1008,25 @@ static const char *uaccess_safe_builtin[] = { "__sanitizer_cov_trace_cmp4", "__sanitizer_cov_trace_cmp8", "__sanitizer_cov_trace_switch", + /* KMSAN */ + "kmsan_copy_to_user", + "kmsan_report", + "kmsan_unpoison_memory", + "__msan_chain_origin", + "__msan_get_context_state", + "__msan_instrument_asm_store", + "__msan_metadata_ptr_for_load_1", + "__msan_metadata_ptr_for_load_2", + "__msan_metadata_ptr_for_load_4", + "__msan_metadata_ptr_for_load_8", + "__msan_metadata_ptr_for_load_n", + "__msan_metadata_ptr_for_store_1", + "__msan_metadata_ptr_for_store_2", + "__msan_metadata_ptr_for_store_4", + "__msan_metadata_ptr_for_store_8", + "__msan_metadata_ptr_for_store_n", + "__msan_poison_alloca", + "__msan_warning", /* UBSAN */ "ubsan_type_mismatch_common", "__ubsan_handle_type_mismatch", From patchwork Tue Apr 26 16:43:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827520 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 96856C433F5 for ; Tue, 26 Apr 2022 16:45:58 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 33CDF6B00A6; Tue, 26 Apr 2022 12:45:58 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 2ECAA6B00A7; Tue, 26 Apr 2022 12:45:58 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 166396B00A8; Tue, 26 Apr 2022 12:45:58 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.26]) by kanga.kvack.org (Postfix) with ESMTP id 0952B6B00A6 for ; Tue, 26 Apr 2022 12:45:58 -0400 (EDT) Received: from smtpin11.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay13.hostedemail.com (Postfix) with ESMTP id E024560B16 for ; Tue, 26 Apr 2022 16:45:57 +0000 (UTC) X-FDA: 79399607154.11.E4910AF Received: from mail-ej1-f73.google.com (mail-ej1-f73.google.com [209.85.218.73]) by imf27.hostedemail.com (Postfix) with ESMTP id 34F364004A for ; Tue, 26 Apr 2022 16:45:56 +0000 (UTC) Received: by mail-ej1-f73.google.com with SMTP id 13-20020a170906328d00b006982d0888a4so9273394ejw.9 for ; Tue, 26 Apr 2022 09:45:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Fx98fFr6n4NVKGBzav2AZaWX3VRzhlrRB+7m3pCHiSE=; b=XHfBp7Cl9xVO2N8a97yhZbWVDGYXssOGKoCjey19oDLs3NuPvKFqJlX77OVyMj94Gt 2jjyaOrHBMMVqPlV8iHx1esMbi+0FDIhnzF7Cykz5C7cq/uxOyLSo3F2oQbehHxHtWWq J0Gr9w+xpCyeNSmgjOjL6MaF2r1Jb+P68Kxo24lt3IECVMYNwPiYSWt8f1OWxVrhhtpd NO15cOsJnLjPuQEWdNOcUO0Xa9QVwX7bjyI3Sb7JSEshKJQJE+Pg0L1D80PFDwJ2uTlH nrRZFhYPzn8Skr3KSaxneh2u7R3qXEV++ExcIhKQSHekMMa2AQX+GQFmuIwyxjASi21j HPqw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version 
:references:subject:from:to:cc; bh=Fx98fFr6n4NVKGBzav2AZaWX3VRzhlrRB+7m3pCHiSE=; b=FYXYZ99upWoSyvNFQXcSFqblJe2d/Ql4V0Ce/hO+3tDwHhqfXtNFcXUXb+o2xXNvd1 3kMUMGy4S3VovDkCTfrIFEQNdj9/B17+54MtIOaYjRhVaPeT5bKrmNoxLHEUZUUsxKhr OGru9PKFZbhjxfDE5V7Y58MTS6KcVcXZ7c4YUbA3qHpF5IHo0S1TfbQWPXFUpgXAz2T1 2Tu/1iGeZ/AbHF6OXHWIxzDVioYqWdnhGhBoEeemGqtg5QyR4xbc0+YyJF7cidZQdpKI Fw19p+8K9XJVnGce8xD+eTh2PJlWtUKIhEpBpy0wdkX7IpUEGI+QVi0+17LK1RDvqcEk 7JOw== X-Gm-Message-State: AOAM532Q+FFKrc7eZeMNvdornscg4WXbGtA90YrPFttIlzSf2iUxoI9u VxhkbpMcOhJJ7aVrS1M21uCIfoUJW8Y= X-Google-Smtp-Source: ABdhPJz3WWkOPsikmC1Af3Hb7X8sg1uvJjDWagM85J5s9I3Wdek2Bn8waW8w9Db/kDe/JuhNTPmAIxhWeNI= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6402:34d2:b0:423:e6c4:3e9 with SMTP id w18-20020a05640234d200b00423e6c403e9mr26328332edc.372.1650991556120; Tue, 26 Apr 2022 09:45:56 -0700 (PDT) Date: Tue, 26 Apr 2022 18:43:06 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-38-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 37/46] x86: kmsan: make READ_ONCE_TASK_STACK() return initialized values From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: 9nykzuib8hh1u9nbiiuwysijp4e68o76 X-Rspamd-Server: rspam12 X-Rspamd-Queue-Id: 34F364004A Authentication-Results: imf27.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=XHfBp7Cl; spf=pass (imf27.hostedemail.com: domain of 3xCFoYgYKCMEnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com designates 209.85.218.73 as permitted sender) smtp.mailfrom=3xCFoYgYKCMEnspklynvvnsl.jvtspu14-ttr2hjr.vyn@flex--glider.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-Rspam-User: X-HE-Tag: 1650991556-278770 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: To avoid false positives, assume that reading from the task stack always produces initialized values. Signed-off-by: Alexander Potapenko --- Link: https://linux-review.googlesource.com/id/I9e2350bf3e88688dd83537e12a23456480141997 --- arch/x86/include/asm/unwind.h | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h index 7cede4dc21f00..87acc90875b74 100644 --- a/arch/x86/include/asm/unwind.h +++ b/arch/x86/include/asm/unwind.h @@ -128,18 +128,19 @@ unsigned long unwind_recover_ret_addr(struct unwind_state *state, } /* - * This disables KASAN checking when reading a value from another task's stack, - * since the other task could be running on another CPU and could have poisoned - * the stack in the meantime. 
+ * This disables KASAN/KMSAN checking when reading a value from another task's + * stack, since the other task could be running on another CPU and could have + * poisoned the stack in the meantime. Frame pointers are uninitialized by + * default, so for KMSAN we mark the return value initialized unconditionally. */ -#define READ_ONCE_TASK_STACK(task, x) \ -({ \ - unsigned long val; \ - if (task == current) \ - val = READ_ONCE(x); \ - else \ - val = READ_ONCE_NOCHECK(x); \ - val; \ +#define READ_ONCE_TASK_STACK(task, x) \ +({ \ + unsigned long val; \ + if (task == current && !IS_ENABLED(CONFIG_KMSAN)) \ + val = READ_ONCE(x); \ + else \ + val = READ_ONCE_NOCHECK(x); \ + val; \ }) static inline bool task_on_another_cpu(struct task_struct *task) From patchwork Tue Apr 26 16:43:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827521 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9BC98C43219 for ; Tue, 26 Apr 2022 16:46:01 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 2EE5B6B00A7; Tue, 26 Apr 2022 12:46:01 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 29FCF6B00A8; Tue, 26 Apr 2022 12:46:01 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1178F6B00A9; Tue, 26 Apr 2022 12:46:01 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id 034286B00A7 for ; Tue, 26 Apr 2022 12:46:01 -0400 (EDT) Received: from smtpin31.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay10.hostedemail.com (Postfix) with ESMTP id D1B009E8 for ; Tue, 26 Apr 2022 16:46:00 +0000 (UTC) X-FDA: 79399607280.31.F50CE6C Received: from mail-lj1-f201.google.com (mail-lj1-f201.google.com [209.85.208.201]) by imf29.hostedemail.com (Postfix) with ESMTP id C84F4120051 for ; Tue, 26 Apr 2022 16:45:57 +0000 (UTC) Received: by mail-lj1-f201.google.com with SMTP id e3-20020a2e9303000000b00249765c005cso4828359ljh.17 for ; Tue, 26 Apr 2022 09:46:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=W69+Dls+nKFAD4lCWK/D6+Hgr/p3mJKYHKVaWr09RwM=; b=owm+BOhtu0whEqeXMLhBmz/yzdgp3Ss1TLbeF+Aik4tkycnXKmYZ3n1neSo8hb18hb BX0hjeRGCN6xXxA1vzrvFH0v9CoRUCtTXQLKrHwmCu+y1ae44cMIL85bcgh0tq9Lx1ES uePdvpJnw//wlpuLMvMpt4JuP6ldvIkHR8UsAm0u7Ixqgiie37sR8o/J92rLVnj0THtN xbPhuRh0HxLLsvpwoO6qui2q+WyPa1s+kNb5h9WxXtLlSB5Ib2oPyXAg/DQqG7CPduQ+ iUtgABXZYumPoT6y5CGoxk/KW2xPtKCKp9d94Q7dOcFmZ/oeIYyV2FFgehwWZr7RwBiW UUTQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=W69+Dls+nKFAD4lCWK/D6+Hgr/p3mJKYHKVaWr09RwM=; b=Z70NF+9AiZs1VR0fZGKu/VoP5bOYoafErpSfFIdshzOdnvePkyUBIUaMv8+YyZjQDT gGHHkyg/TVutWbAOWa1EL1RessJycExVVIYTrFEYXQYEPA8ydbYIJUUxtjOQRQMb73HC oNDrSnyKashqIWdfuqwO1i9lSXX2CRB94fE/lUqxxVOFQX+KUlmMNQ3dPdprkVl0LsXG wHdbItx0FRyzA5RtJXaGL+T30+OrETZYVNAtPW4+nN73VeiD79ltJJR40GuN2vMEMIXs XXAN/JxRmPob7lSUwV5mCbfRNFh4sbt4CzYPOZkz/ih5k7KCrbZgAHURAOyYFR97wcoc PTGQ== X-Gm-Message-State: 
AOAM531/et6XO5ATl/oTYI08qrdFoU0OSKNmpRVcBdGXgU/kat8NIcpJ AZrtQmoZeOOmPzLb4CyK3LA68qVfwec= X-Google-Smtp-Source: ABdhPJwEO/56WzEvUU6hiW7HK6Xy45a+LFdxcYVfERHbSU9DhLBh7fMmnT9aTpw3X2NOrWJtIFseLCU8PIU= X-Received: from glider.muc.corp.google.com ([2a00:79e0:15:13:d580:abeb:bf6d:5726]) (user=glider job=sendgmr) by 2002:a05:6512:25a4:b0:471:fbe9:8893 with SMTP id bf36-20020a05651225a400b00471fbe98893mr11183684lfb.147.1650991558811; Tue, 26 Apr 2022 09:45:58 -0700 (PDT) Date: Tue, 26 Apr 2022 18:43:07 +0200 In-Reply-To: <20220426164315.625149-1-glider@google.com> Message-Id: <20220426164315.625149-39-glider@google.com> Mime-Version: 1.0 References: <20220426164315.625149-1-glider@google.com> X-Mailer: git-send-email 2.36.0.rc2.479.g8af0fa9b8e-goog Subject: [PATCH v3 38/46] x86: kmsan: disable instrumentation of unsupported code From: Alexander Potapenko To: glider@google.com Cc: Alexander Viro , Andrew Morton , Andrey Konovalov , Andy Lutomirski , Arnd Bergmann , Borislav Petkov , Christoph Hellwig , Christoph Lameter , David Rientjes , Dmitry Vyukov , Eric Dumazet , Greg Kroah-Hartman , Herbert Xu , Ilya Leoshkevich , Ingo Molnar , Jens Axboe , Joonsoo Kim , Kees Cook , Marco Elver , Mark Rutland , Matthew Wilcox , "Michael S. Tsirkin" , Pekka Enberg , Peter Zijlstra , Petr Mladek , Steven Rostedt , Thomas Gleixner , Vasily Gorbik , Vegard Nossum , Vlastimil Babka , kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org X-Stat-Signature: im8pehr6poaxo5wigmtb5mrqt8gxcqe6 X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: C84F4120051 X-Rspam-User: Authentication-Results: imf29.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=owm+BOht; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf29.hostedemail.com: domain of 3xiFoYgYKCMMpurmn0pxxpun.lxvurw36-vvt4jlt.x0p@flex--glider.bounces.google.com designates 209.85.208.201 as permitted sender) smtp.mailfrom=3xiFoYgYKCMMpurmn0pxxpun.lxvurw36-vvt4jlt.x0p@flex--glider.bounces.google.com X-HE-Tag: 1650991557-818369 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Instrumenting some files with KMSAN will result in kernel being unable to link, boot or crashing at runtime for various reasons (e.g. infinite recursion caused by instrumentation hooks calling instrumented code again). Completely omit KMSAN instrumentation in the following places: - arch/x86/boot and arch/x86/realmode/rm, as KMSAN doesn't work for i386; - arch/x86/entry/vdso, which isn't linked with KMSAN runtime; - three files in arch/x86/kernel - boot problems; - arch/x86/mm/cpu_entry_area.c - recursion. 
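For reference, the mechanism used throughout the diff below is the usual kbuild opt-out pattern. A minimal sketch of the two forms (foo.o here is hypothetical):

	# Disable KMSAN for every object built from this Makefile
	# (used for early boot code and the realmode stub).
	KMSAN_SANITIZE := n

	# Disable KMSAN for a single object only (used e.g. for nmi.o,
	# common.o and cpu_entry_area.o).
	KMSAN_SANITIZE_foo.o := n

Both knobs follow the existing KASAN_SANITIZE/KCSAN_SANITIZE conventions, as the hunks below show.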
Signed-off-by: Alexander Potapenko --- v2: -- moved the patch earlier in the series so that KMSAN can compile -- split off the non-x86 part into a separate patch v3: -- added a comment to lib/Makefile Link: https://linux-review.googlesource.com/id/Id5e5c4a9f9d53c24a35ebb633b814c414628d81b --- arch/x86/boot/Makefile | 1 + arch/x86/boot/compressed/Makefile | 1 + arch/x86/entry/vdso/Makefile | 3 +++ arch/x86/kernel/Makefile | 2 ++ arch/x86/kernel/cpu/Makefile | 1 + arch/x86/mm/Makefile | 2 ++ arch/x86/realmode/rm/Makefile | 1 + lib/Makefile | 2 ++ 8 files changed, 13 insertions(+) diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile index b5aecb524a8aa..d5623232b763f 100644 --- a/arch/x86/boot/Makefile +++ b/arch/x86/boot/Makefile @@ -12,6 +12,7 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Kernel does not boot with kcov instrumentation here. diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 6115274fe10fc..6e2e34d2655ce 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -20,6 +20,7 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile index 693f8b9031fb8..4f835eaa03ec1 100644 --- a/arch/x86/entry/vdso/Makefile +++ b/arch/x86/entry/vdso/Makefile @@ -11,6 +11,9 @@ include $(srctree)/lib/vdso/Makefile # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n +KMSAN_SANITIZE_vclock_gettime.o := n +KMSAN_SANITIZE_vgetcpu.o := n + UBSAN_SANITIZE := n KCSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index c41ef42adbe8a..fcbf6cf875a90 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -33,6 +33,8 @@ KASAN_SANITIZE_sev.o := n # With some compiler versions the generated code results in boot hangs, caused # by several compilation units. To be safe, disable all instrumentation. KCSAN_SANITIZE := n +KMSAN_SANITIZE_head$(BITS).o := n +KMSAN_SANITIZE_nmi.o := n OBJECT_FILES_NON_STANDARD_test_nx.o := y diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index 9661e3e802be5..f10a921ee7565 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -12,6 +12,7 @@ endif # If these files are instrumented, boot hangs during the first second. KCOV_INSTRUMENT_common.o := n KCOV_INSTRUMENT_perf_event.o := n +KMSAN_SANITIZE_common.o := n # As above, instrumenting secondary CPU boot code causes boot hangs. KCSAN_SANITIZE_common.o := n diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index fe3d3061fc116..ada726784012f 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -12,6 +12,8 @@ KASAN_SANITIZE_mem_encrypt_identity.o := n # Disable KCSAN entirely, because otherwise we get warnings that some functions # reference __initdata sections. KCSAN_SANITIZE := n +# Avoid recursion by not calling KMSAN hooks for CEA code. 
+KMSAN_SANITIZE_cpu_entry_area.o := n ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_mem_encrypt.o = -pg diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile index 83f1b6a56449f..f614009d3e4e2 100644 --- a/arch/x86/realmode/rm/Makefile +++ b/arch/x86/realmode/rm/Makefile @@ -10,6 +10,7 @@ # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. diff --git a/lib/Makefile b/lib/Makefile index caeb55f661726..444c961f2f2e1 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -269,6 +269,8 @@ obj-$(CONFIG_IRQ_POLL) += irq_poll.o CFLAGS_stackdepot.o += -fno-builtin obj-$(CONFIG_STACKDEPOT) += stackdepot.o KASAN_SANITIZE_stackdepot.o := n +# In particular, instrumenting stackdepot.c with KMSAN will result in infinite +# recursion. KMSAN_SANITIZE_stackdepot.o := n KCOV_INSTRUMENT_stackdepot.o := n From patchwork Tue Apr 26 16:43:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 12827522 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 42B43C43217 for ; Tue, 26 Apr 2022 16:46:04 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D73E86B00A8; Tue, 26 Apr 2022 12:46:03 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id D249E6B00A9; Tue, 26 Apr 2022 12:46:03 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C3ABF6B00AA; Tue, 26 Apr 2022 12:46:03 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (relay.hostedemail.com [64.99.140.25]) by kanga.kvack.org (Postfix) with ESMTP id B595C6B00A8 for ; Tue, 26 Apr 2022 12:46:03 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 4945020D49 for ; Tue, 26 Apr 2022 16:46:03 +0000 (UTC) X-FDA: 79399607406.05.BEFFAB6 Received: from mail-ed1-f74.google.com (mail-ed1-f74.google.com [209.85.208.74]) by imf26.hostedemail.com (Postfix) with ESMTP id 36776140042 for ; Tue, 26 Apr 2022 16:46:01 +0000 (UTC) Received: by mail-ed1-f74.google.com with SMTP id w8-20020a50d788000000b00418e6810364so10519673edi.13 for ; Tue, 26 Apr 2022 09:46:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=XLE4dDKZQivTtEUon/Tj4SL+cxx+nqfawlbFCYGovJc=; b=JMsNnrgA3RJaeGbSFsIbMJk1WKHAyC9mHh9w+Ezp2I0AryPzMo0hvP3hFHcM3Fes/j 0Od6aTn5hLCVFSLoC+QEN/Y51yJ/k3To1PPqp71oGx9SidpVIg7W/FFigKzrWKt5pZNf ObR3IfkU591CrM0QCB8Yyv69AzGBOduai/CQ5ZELgXkxhQBpzhytZFiVH0+MwZEQnsa3 V3yvl4lBVXMYXdY61XOr/QmKRJgkZHWIaUhgcwCtzjh2gZfAxHgEOuB8kzeynle/cRhb dXMQ2W1fkb0VB8Nx+GeffYe9rusz/mxzvp3Q0HOhGVl2s14bRkzEjCgm6Hi/UlxZ9QPZ fd4A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=XLE4dDKZQivTtEUon/Tj4SL+cxx+nqfawlbFCYGovJc=; b=5kzGqQx6VoInym7UvBFyu7OGQ0RTXsyWtVwZnHIEmSCUltHxX5ZZwxNytKRUxqVlbA ffw2nIlAKRZjCA9RoqFF6lREhqPMCTaOr9ZVMz3jU+8V7z3t9MtuaMxd7o3fxL4hzD4N 
From patchwork Tue Apr 26 16:43:08 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12827522
Subject: [PATCH v3 39/46] x86: kmsan: skip shadow checks in __switch_to()
From: Alexander Potapenko
Date: Tue, 26 Apr 2022 18:43:08 +0200
Message-Id: <20220426164315.625149-40-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
References: <20220426164315.625149-1-glider@google.com>

When instrumenting functions, KMSAN obtains the per-task state (mostly
pointers to metadata for function arguments and return values) once per
function, at its beginning, using the `current` pointer. Every time the
instrumented function calls another function, this state (`struct
kmsan_context_state`) is updated with the shadow/origin data of the
passed and returned values.

When `current` changes in the low-level arch code, instrumented code
cannot notice that and will keep referring to the old state, possibly
corrupting it or using stale data. This may result in false positive
reports.

To deal with that, we need to apply __no_kmsan_checks to the functions
performing context switching: this will result in skipping all KMSAN
shadow checks and marking newly created values as initialized,
preventing all false positive reports in those functions. False
negatives are still possible, but we expect them to be rare and
short-lived.

Suggested-by: Marco Elver
Signed-off-by: Alexander Potapenko
---
v2:
 -- This patch was previously called "kmsan: skip shadow checks in
    files doing context switches". Per Mark Rutland's suggestion, we now
    only skip checks in low-level arch-specific code, as context switches
    in common code should be invisible to KMSAN. We also apply the checks
    to precisely the functions performing the context switch instead of
    the whole file.
 -- Replace KMSAN_ENABLE_CHECKS_process_64.o with __no_kmsan_checks

Link: https://linux-review.googlesource.com/id/I45e3ed9c5f66ee79b0409d1673d66ae419029bcb
---
 arch/x86/kernel/process_64.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index e459253649be2..9952a4c7e1d20 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -553,6 +553,7 @@ void compat_start_thread(struct pt_regs *regs, u32 new_ip, u32 new_sp, bool x32)
  * Kprobes not supported here. Set the probe on schedule instead.
  * Function graph tracer not supported too.
  */
+__no_kmsan_checks
 __visible __notrace_funcgraph struct task_struct *
 __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 {
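The failure mode described above can be modelled entirely in userspace. In the
sketch below (simplified types, no real KMSAN involved), the per-task metadata
pointer is cached once at function entry, so when the stand-in for `current`
changes mid-function the function keeps writing through the stale pointer --
which is exactly why the context-switching code must not be subject to the
usual checks:

#include <stdint.h>
#include <stdio.h>

/* Drastically simplified stand-in for struct kmsan_context_state. */
struct ctx_state { uint64_t param_shadow[8]; };
struct task { struct ctx_state kmsan_ctx; };

static struct task task_a, task_b;
static struct task *current_task = &task_a;	/* models `current` */

/* Models an instrumented function: the state pointer is fetched once. */
static void instrumented_fn(void)
{
	struct ctx_state *state = &current_task->kmsan_ctx; /* cached at entry */

	current_task = &task_b;	/* models __switch_to() changing `current` */

	/* Still writes through the stale pointer: task_a's metadata gets
	 * clobbered although we now run on behalf of task_b. */
	state->param_shadow[0] = 0xff;
}

int main(void)
{
	instrumented_fn();
	printf("task_a shadow[0] = %llx (stale update)\n",
	       (unsigned long long)task_a.kmsan_ctx.param_shadow[0]);
	return 0;
}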
From patchwork Tue Apr 26 16:43:09 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12827523
Subject: [PATCH v3 40/46] x86: kmsan: handle open-coded assembly in lib/iomem.c
From: Alexander Potapenko
Date: Tue, 26 Apr 2022 18:43:09 +0200
Message-Id: <20220426164315.625149-41-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
References: <20220426164315.625149-1-glider@google.com>

KMSAN cannot intercept memory accesses within asm() statements.
That's why we add kmsan_unpoison_memory() and kmsan_check_memory() to
tell it how to handle memory copied from/to I/O memory.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/Icb16bf17269087e475debf07a7fe7d4bebc3df23
---
 arch/x86/lib/iomem.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/lib/iomem.c b/arch/x86/lib/iomem.c
index 3e2f33fc33de2..e0411a3774d49 100644
--- a/arch/x86/lib/iomem.c
+++ b/arch/x86/lib/iomem.c
@@ -1,6 +1,7 @@
 #include <linux/string.h>
 #include <linux/module.h>
 #include <linux/io.h>
+#include <linux/kmsan-checks.h>
 
 #define movs(type,to,from) \
 	asm volatile("movs" type:"=&D" (to), "=&S" (from):"0" (to), "1" (from):"memory")
@@ -37,6 +38,8 @@ static void string_memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
 			n-=2;
 		}
 	rep_movs(to, (const void *)from, n);
+	/* KMSAN must treat values read from devices as initialized. */
+	kmsan_unpoison_memory(to, n);
 }
 
 static void string_memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
@@ -44,6 +47,8 @@ static void string_memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
 	if (unlikely(!n))
 		return;
 
+	/* Make sure uninitialized memory isn't copied to devices. */
+	kmsan_check_memory(from, n);
 	/* Align any unaligned destination IO */
 	if (unlikely(1 & (unsigned long)to)) {
 		movs("b", to, from);
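Userspace MemorySanitizer has the same blind spot and the same escape hatch,
which may make the intent clearer. In the sketch below (clang
-fsanitize=memory, x86-64 only; not kernel code), the asm block's writes are
invisible to the tool, so the buffer has to be unpoisoned by hand before use,
and data handed to opaque code can be checked explicitly:

/* Build with: clang -fsanitize=memory -O1 example.c */
#include <sanitizer/msan_interface.h>
#include <stddef.h>
#include <stdio.h>

static void copy_with_asm(char *dst, const char *src, size_t n)
{
	/* MSan cannot see what this asm block writes into dst. */
	asm volatile("rep movsb"
		     : "+D"(dst), "+S"(src), "+c"(n)
		     :
		     : "memory");
}

int main(void)
{
	char src[16] = "device payload";
	char dst[16];

	copy_with_asm(dst, src, sizeof(dst));
	/* Without this hint, using dst would be reported as a use of
	 * uninitialized memory. */
	__msan_unpoison(dst, sizeof(dst));

	/* The mirror-image hint: complain now if src has poisoned bytes. */
	__msan_check_mem_is_initialized(src, sizeof(src));

	printf("%s\n", dst);
	return 0;
}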
From patchwork Tue Apr 26 16:43:10 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12827525
Subject: [PATCH v3 41/46] x86: kmsan: use __msan_ string functions where possible.
From: Alexander Potapenko
Date: Tue, 26 Apr 2022 18:43:10 +0200
Message-Id: <20220426164315.625149-42-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
References: <20220426164315.625149-1-glider@google.com>

Unless stated otherwise (by explicitly calling __memcpy(), __memset()
or __memmove()), we want all string functions to call their __msan_
versions (e.g. __msan_memcpy() instead of memcpy()), so that shadow and
origin values are updated accordingly.

The bootloader must still use the default string functions to avoid
crashes.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I7ca9bd6b4f5c9b9816404862ae87ca7984395f33
---
 arch/x86/include/asm/string_64.h | 23 +++++++++++++++++++++--
 include/linux/fortify-string.h   |  2 ++
 2 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
index 6e450827f677a..3b87d889b6e16 100644
--- a/arch/x86/include/asm/string_64.h
+++ b/arch/x86/include/asm/string_64.h
@@ -11,11 +11,23 @@
    function. */
 
 #define __HAVE_ARCH_MEMCPY 1
+#if defined(__SANITIZE_MEMORY__)
+#undef memcpy
+void *__msan_memcpy(void *dst, const void *src, size_t size);
+#define memcpy __msan_memcpy
+#else
 extern void *memcpy(void *to, const void *from, size_t len);
+#endif
 extern void *__memcpy(void *to, const void *from, size_t len);
 
 #define __HAVE_ARCH_MEMSET
+#if defined(__SANITIZE_MEMORY__)
+extern void *__msan_memset(void *s, int c, size_t n);
+#undef memset
+#define memset __msan_memset
+#else
 void *memset(void *s, int c, size_t n);
+#endif
 void *__memset(void *s, int c, size_t n);
 
 #define __HAVE_ARCH_MEMSET16
@@ -55,7 +67,13 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n)
 }
 
 #define __HAVE_ARCH_MEMMOVE
+#if defined(__SANITIZE_MEMORY__)
+#undef memmove
+void *__msan_memmove(void *dest, const void *src, size_t len);
+#define memmove __msan_memmove
+#else
 void *memmove(void *dest, const void *src, size_t count);
+#endif
 void *__memmove(void *dest, const void *src, size_t count);
 
 int memcmp(const void *cs, const void *ct, size_t count);
@@ -64,8 +82,7 @@ char *strcpy(char *dest, const char *src);
 char *strcat(char *dest, const char *src);
 int strcmp(const char *cs, const char *ct);
 
-#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)
-
+#if (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__))
 /*
  * For files that not instrumented (e.g. mm/slub.c) we
  * should use not instrumented version of mem* functions.
@@ -73,7 +90,9 @@ int strcmp(const char *cs, const char *ct);
 
 #undef memcpy
 #define memcpy(dst, src, len) __memcpy(dst, src, len)
+#undef memmove
 #define memmove(dst, src, len) __memmove(dst, src, len)
+#undef memset
 #define memset(s, c, n) __memset(s, c, n)
 
 #ifndef __NO_FORTIFY
diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
index 295637a66c46b..fe48f77599e04 100644
--- a/include/linux/fortify-string.h
+++ b/include/linux/fortify-string.h
@@ -269,8 +269,10 @@ __FORTIFY_INLINE void fortify_memset_chk(__kernel_size_t size,
  * __builtin_object_size() must be captured here to avoid evaluating argument
  * side-effects further into the macro layers.
  */
+#ifndef CONFIG_KMSAN
 #define memset(p, c, s) __fortify_memset_chk(p, c, s,			\
 		__builtin_object_size(p, 0), __builtin_object_size(p, 1))
+#endif
 
 /*
  * To make sure the compiler can enforce protection against buffer overflows,
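A self-contained sketch of the macro-redirection pattern these headers rely on
(the names below are simplified; in the real build __msan_memcpy() is supplied
by the compiler runtime, not written by hand):

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the runtime hook the compiler would provide. */
static void *my_msan_memcpy(void *dst, const void *src, size_t n)
{
	printf("would copy shadow/origin for %zu bytes here\n", n);
	return memcpy(dst, src, n);	/* the actual data copy */
}

/* What string_64.h does under __SANITIZE_MEMORY__: plain memcpy() calls
 * are redirected, while a __memcpy() spelling would stay untouched. */
#define memcpy(dst, src, n) my_msan_memcpy(dst, src, n)

int main(void)
{
	char a[8] = "abcdefg", b[8];

	memcpy(b, a, sizeof(a));	/* goes through the hook */
	printf("%s\n", b);
	return 0;
}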
From patchwork Tue Apr 26 16:43:11 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12827524
Subject: [PATCH v3 42/46] x86: kmsan: sync metadata pages on page fault
From: Alexander Potapenko
Date: Tue, 26 Apr 2022 18:43:11 +0200
Message-Id: <20220426164315.625149-43-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
References: <20220426164315.625149-1-glider@google.com>

KMSAN assumes that the shadow and origin pages for every allocated page
are accessible. For pages in the [VMALLOC_START, VMALLOC_END) range,
those metadata pages start at KMSAN_VMALLOC_SHADOW_START and
KMSAN_VMALLOC_ORIGIN_START, therefore we must sync a bigger memory
region.

Signed-off-by: Alexander Potapenko
---
v2:
 -- addressed reports from kernel test robot

Link: https://linux-review.googlesource.com/id/Ia5bd541e54f1ecc11b86666c3ec87c62ac0bdfb8
---
 arch/x86/mm/fault.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index d0074c6ed31a3..f2250a32a10ca 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -260,7 +260,7 @@ static noinline int vmalloc_fault(unsigned long address)
 }
 NOKPROBE_SYMBOL(vmalloc_fault);
 
-void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+static void __arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
 
@@ -284,6 +284,27 @@ void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
 	}
 }
 
+void arch_sync_kernel_mappings(unsigned long start, unsigned long end)
+{
+	__arch_sync_kernel_mappings(start, end);
+#ifdef CONFIG_KMSAN
+	/*
+	 * KMSAN maintains two additional metadata page mappings for the
+	 * [VMALLOC_START, VMALLOC_END) range. These mappings start at
+	 * KMSAN_VMALLOC_SHADOW_START and KMSAN_VMALLOC_ORIGIN_START and
+	 * have to be synced together with the vmalloc memory mapping.
+	 */
+	if (start >= VMALLOC_START && end < VMALLOC_END) {
+		__arch_sync_kernel_mappings(
+			start - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START,
+			end - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START);
+		__arch_sync_kernel_mappings(
+			start - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START,
+			end - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START);
+	}
+#endif
+}
+
 static bool low_pfn(unsigned long pfn)
 {
 	return pfn < max_low_pfn;
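The address arithmetic used above is a fixed-offset translation, which a few
lines of standalone C make obvious. The constants below are illustrative only
(VMALLOC_START matches a common x86-64 layout; the KMSAN_VMALLOC_* values are
made up for the example -- the real ones live in the KMSAN headers added
earlier in the series):

#include <stdint.h>
#include <stdio.h>

#define VMALLOC_START              0xffffc90000000000ULL	/* typical x86-64 */
#define KMSAN_VMALLOC_SHADOW_START 0xfffff50000000000ULL	/* illustrative   */
#define KMSAN_VMALLOC_ORIGIN_START 0xfffff90000000000ULL	/* illustrative   */

int main(void)
{
	uint64_t addr = VMALLOC_START + 0x1234000;	/* some faulting vmalloc page */

	/* The metadata regions mirror the vmalloc region one-to-one, so a
	 * fault range is translated by pure addition of a base offset. */
	uint64_t shadow = addr - VMALLOC_START + KMSAN_VMALLOC_SHADOW_START;
	uint64_t origin = addr - VMALLOC_START + KMSAN_VMALLOC_ORIGIN_START;

	printf("addr   %#llx\nshadow %#llx\norigin %#llx\n",
	       (unsigned long long)addr,
	       (unsigned long long)shadow,
	       (unsigned long long)origin);
	return 0;
}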
From patchwork Tue Apr 26 16:43:12 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12827526
Subject: [PATCH v3 43/46] x86: kasan: kmsan: support CONFIG_GENERIC_CSUM on x86, enable it for KASAN/KMSAN
From: Alexander Potapenko
Date: Tue, 26 Apr 2022 18:43:12 +0200
Message-Id: <20220426164315.625149-44-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
References: <20220426164315.625149-1-glider@google.com>

This is needed to let memory tools like KASAN and KMSAN see the memory
accesses from the checksum code. Without CONFIG_GENERIC_CSUM the tools
can't see memory accesses originating from handwritten assembly code.

For KASAN it's a question of detecting more bugs; for KMSAN, using the
C implementation also helps avoid false positives originating from
seemingly uninitialized checksum values.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I3e95247be55b1112af59dbba07e8cbf34e50a581
---
 arch/x86/Kconfig                |  4 ++++
 arch/x86/include/asm/checksum.h | 16 ++++++++++------
 arch/x86/lib/Makefile           |  2 ++
 3 files changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index b0142e01002e3..ee5e6fd65bf1d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -319,6 +319,10 @@ config GENERIC_ISA_DMA
 	def_bool y
 	depends on ISA_DMA_API
 
+config GENERIC_CSUM
+	bool
+	default y if KMSAN || KASAN
+
 config GENERIC_BUG
 	def_bool y
 	depends on BUG
diff --git a/arch/x86/include/asm/checksum.h b/arch/x86/include/asm/checksum.h
index bca625a60186c..6df6ece8a28ec 100644
--- a/arch/x86/include/asm/checksum.h
+++ b/arch/x86/include/asm/checksum.h
@@ -1,9 +1,13 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#define _HAVE_ARCH_COPY_AND_CSUM_FROM_USER 1
-#define HAVE_CSUM_COPY_USER
-#define _HAVE_ARCH_CSUM_AND_COPY
-#ifdef CONFIG_X86_32
-# include <asm/checksum_32.h>
+#ifdef CONFIG_GENERIC_CSUM
+# include <asm-generic/checksum.h>
 #else
-# include <asm/checksum_64.h>
+# define _HAVE_ARCH_COPY_AND_CSUM_FROM_USER 1
+# define HAVE_CSUM_COPY_USER
+# define _HAVE_ARCH_CSUM_AND_COPY
+# ifdef CONFIG_X86_32
+#  include <asm/checksum_32.h>
+# else
+#  include <asm/checksum_64.h>
+# endif
 #endif
diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index f76747862bd2e..7ba5f61d72735 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -65,7 +65,9 @@ ifneq ($(CONFIG_X86_CMPXCHG64),y)
 endif
 else
         obj-y += iomap_copy_64.o
+ifneq ($(CONFIG_GENERIC_CSUM),y)
         lib-y += csum-partial_64.o csum-copy_64.o csum-wrappers_64.o
+endif
         lib-y += clear_page_64.o copy_page_64.o
         lib-y += memmove_64.o memset_64.o
         lib-y += copy_user_64.o
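The reason a C checksum is friendlier to the tools is simply that every byte
it touches is loaded by compiler-visible code. A textbook 16-bit ones'
complement sum (RFC 1071 style) is sketched below; it is not the kernel's
optimized do_csum(), just a minimal stand-in showing that each load is
ordinary C that KASAN/KMSAN instrumentation can observe:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static uint16_t csum(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;

	while (len > 1) {		/* every load here is plain C */
		sum += (uint32_t)buf[0] << 8 | buf[1];
		buf += 2;
		len -= 2;
	}
	if (len)			/* odd trailing byte */
		sum += (uint32_t)buf[0] << 8;
	while (sum >> 16)		/* fold the carries */
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

int main(void)
{
	uint8_t data[] = { 0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46 };

	printf("checksum: 0x%04x\n", csum(data, sizeof(data)));
	return 0;
}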
From patchwork Tue Apr 26 16:43:13 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12827527
Subject: [PATCH v3 44/46] x86: fs: kmsan: disable CONFIG_DCACHE_WORD_ACCESS
From: Alexander Potapenko
Date: Tue, 26 Apr 2022 18:43:13 +0200
Message-Id: <20220426164315.625149-45-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
References: <20220426164315.625149-1-glider@google.com>

dentry_string_cmp() calls read_word_at_a_time(), which may read
uninitialized bytes to optimize string comparisons. Disabling
CONFIG_DCACHE_WORD_ACCESS should prohibit this optimization, as well as
(probably) similar ones.

Suggested-by: Andrey Konovalov
Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I4c0073224ac2897cafb8c037362c49dda9cfa133
---
 arch/x86/Kconfig | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ee5e6fd65bf1d..3209073f96415 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -128,7 +128,9 @@ config X86
 	select CLKEVT_I8253
 	select CLOCKSOURCE_VALIDATE_LAST_CYCLE
 	select CLOCKSOURCE_WATCHDOG
-	select DCACHE_WORD_ACCESS
+	# Word-size accesses may read uninitialized data past the trailing \0
+	# in strings and cause false KMSAN reports.
+	select DCACHE_WORD_ACCESS if !KMSAN
 	select DYNAMIC_SIGFRAME
 	select EDAC_ATOMIC_SCRUB
 	select EDAC_SUPPORT
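A deliberately simplified userspace illustration of the word-at-a-time pattern
(this is not dentry_string_cmp(), and the load goes through memcpy() to stay
portable): only "hi\0" is ever written, yet the 8-byte load inspects five
bytes that were never initialized, which is precisely the kind of access KMSAN
would report.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[8];
	uint64_t word;

	memcpy(buf, "hi", 3);			/* bytes 3..7 remain uninitialized */
	memcpy(&word, buf, sizeof(word));	/* word-at-a-time load */

	/* A byte-wise comparison would stop at the '\0' and never touch the
	 * uninitialized tail; the word-wise variant looks at all 8 bytes. */
	printf("loaded word: %016llx\n", (unsigned long long)word);
	return 0;
}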
From patchwork Tue Apr 26 16:43:14 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12827529
Subject: [PATCH v3 45/46] x86: kmsan: handle register passing from uninstrumented code
From: Alexander Potapenko
Date: Tue, 26 Apr 2022 18:43:14 +0200
Message-Id: <20220426164315.625149-46-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
References: <20220426164315.625149-1-glider@google.com>

Replace instrumentation_begin() with instrumentation_begin_with_regs()
to let KMSAN handle the non-instrumented code and unpoison the pt_regs
passed from it to the instrumented part. This is done to reduce the
number of false positive reports.

Signed-off-by: Alexander Potapenko
---
v2:
 -- this patch was previously called "x86: kmsan: handle register
    passing from uninstrumented code". Instead of adding KMSAN-specific
    code to every instrumentation_begin()/instrumentation_end() section,
    we changed instrumentation_begin() to instrumentation_begin_with_regs()
    where applicable.

Link: https://linux-review.googlesource.com/id/I435ec076cd21752c2f877f5da81f5eced62a2ea4
---
 arch/x86/entry/common.c         |  3 ++-
 arch/x86/include/asm/idtentry.h | 10 +++++-----
 arch/x86/kernel/cpu/mce/core.c  |  2 +-
 arch/x86/kernel/kvm.c           |  2 +-
 arch/x86/kernel/nmi.c           |  2 +-
 arch/x86/kernel/sev.c           |  4 ++--
 arch/x86/kernel/traps.c         | 14 +++++++-------
 arch/x86/mm/fault.c             |  2 +-
 8 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 6c2826417b337..047d157987859 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -75,7 +76,7 @@ __visible noinstr void do_syscall_64(struct pt_regs *regs, int nr)
 	add_random_kstack_offset();
 	nr = syscall_enter_from_user_mode(regs, nr);
 
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 
 	if (!do_syscall_x64(regs, nr) && !do_syscall_x32(regs, nr) && nr != -1) {
 		/* Invalid system call, but still a system call. */
diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 7924f27f5c8b1..172b9b6f90628 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -53,7 +53,7 @@ __visible noinstr void func(struct pt_regs *regs)			\
 {									\
 	irqentry_state_t state = irqentry_enter(regs);			\
 									\
-	instrumentation_begin();					\
+	instrumentation_begin_with_regs(regs);				\
 	__##func (regs);						\
 	instrumentation_end();						\
 	irqentry_exit(regs, state);					\
@@ -100,7 +100,7 @@ __visible noinstr void func(struct pt_regs *regs,			\
 {									\
 	irqentry_state_t state = irqentry_enter(regs);			\
 									\
-	instrumentation_begin();					\
+	instrumentation_begin_with_regs(regs);				\
 	__##func (regs, error_code);					\
 	instrumentation_end();						\
 	irqentry_exit(regs, state);					\
@@ -197,7 +197,7 @@ __visible noinstr void func(struct pt_regs *regs,			\
 	irqentry_state_t state = irqentry_enter(regs);			\
 	u32 vector = (u32)(u8)error_code;				\
 									\
-	instrumentation_begin();					\
+	instrumentation_begin_with_regs(regs);				\
 	kvm_set_cpu_l1tf_flush_l1d();					\
 	run_irq_on_irqstack_cond(__##func, regs, vector);		\
 	instrumentation_end();						\
@@ -237,7 +237,7 @@ __visible noinstr void func(struct pt_regs *regs)			\
 {									\
 	irqentry_state_t state = irqentry_enter(regs);			\
 									\
-	instrumentation_begin();					\
+	instrumentation_begin_with_regs(regs);				\
 	kvm_set_cpu_l1tf_flush_l1d();					\
 	run_sysvec_on_irqstack_cond(__##func, regs);			\
 	instrumentation_end();						\
@@ -264,7 +264,7 @@ __visible noinstr void func(struct pt_regs *regs)			\
 {									\
 	irqentry_state_t state = irqentry_enter(regs);			\
 									\
-	instrumentation_begin();					\
+	instrumentation_begin_with_regs(regs);				\
 	__irq_enter_raw();						\
 	kvm_set_cpu_l1tf_flush_l1d();					\
 	__##func (regs);						\
diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 981496e6bc0e4..e5acff54f7d55 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1376,7 +1376,7 @@ static void queue_task_work(struct mce *m, char *msg, void (*func)(struct callback_head *))
 /* Handle unconfigured int18 (should never happen) */
 static noinstr void unexpected_machine_check(struct pt_regs *regs)
 {
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 	pr_err("CPU#%d: Unexpected int18 (Machine Check)\n",
 	       smp_processor_id());
 	instrumentation_end();
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 8b1c45c9cda87..3df82a51ab1b5 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -250,7 +250,7 @@ noinstr bool __kvm_handle_async_pf(struct pt_regs *regs, u32 token)
 		return false;
 
 	state = irqentry_enter(regs);
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 
 	/*
 	 * If the host managed to inject an async #PF into an interrupt
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index e73f7df362f5d..5078417e16ec1 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -328,7 +328,7 @@ static noinstr void default_do_nmi(struct pt_regs *regs)
 
 	__this_cpu_write(last_nmi_rip, regs->ip);
 
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 
 	handled = nmi_handle(NMI_LOCAL, regs);
 	__this_cpu_add(nmi_stats.normal, handled);
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index e6d316a01fdd4..9bfc29fc9c983 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -1330,7 +1330,7 @@ DEFINE_IDTENTRY_VC_KERNEL(exc_vmm_communication)
 
 	irq_state = irqentry_nmi_enter(regs);
 
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 
 	if (!vc_raw_handle_exception(regs, error_code)) {
 		/* Show some debug info */
@@ -1362,7 +1362,7 @@ DEFINE_IDTENTRY_VC_USER(exc_vmm_communication)
 	}
 
 	irqentry_enter_from_user_mode(regs);
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 
 	if (!vc_raw_handle_exception(regs, error_code)) {
 		/*
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 1563fb9950059..9d3c9c4de94d3 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -305,7 +305,7 @@ static noinstr bool handle_bug(struct pt_regs *regs)
 	/*
 	 * All lies, just get the WARN/BUG out.
 	 */
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 	/*
 	 * Since we're emulating a CALL with exceptions, restore the interrupt
 	 * state to what it was at the exception site.
@@ -336,7 +336,7 @@ DEFINE_IDTENTRY_RAW(exc_invalid_op)
 		return;
 
 	state = irqentry_enter(regs);
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 	handle_invalid_op(regs);
 	instrumentation_end();
 	irqentry_exit(regs, state);
@@ -490,7 +490,7 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
 #endif
 
 	irqentry_nmi_enter(regs);
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 	notify_die(DIE_TRAP, str, regs, error_code, X86_TRAP_DF, SIGSEGV);
 
 	tsk->thread.error_code = error_code;
@@ -820,14 +820,14 @@ DEFINE_IDTENTRY_RAW(exc_int3)
 	 */
 	if (user_mode(regs)) {
 		irqentry_enter_from_user_mode(regs);
-		instrumentation_begin();
+		instrumentation_begin_with_regs(regs);
 		do_int3_user(regs);
 		instrumentation_end();
 		irqentry_exit_to_user_mode(regs);
 	} else {
 		irqentry_state_t irq_state = irqentry_nmi_enter(regs);
 
-		instrumentation_begin();
+		instrumentation_begin_with_regs(regs);
 		if (!do_int3(regs))
 			die("int3", regs, 0);
 		instrumentation_end();
@@ -1026,7 +1026,7 @@ static __always_inline void exc_debug_kernel(struct pt_regs *regs,
 	 */
 	unsigned long dr7 = local_db_save();
 	irqentry_state_t irq_state = irqentry_nmi_enter(regs);
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 
 	/*
 	 * If something gets miswired and we end up here for a user mode
@@ -1105,7 +1105,7 @@ static __always_inline void exc_debug_user(struct pt_regs *regs,
 	 */
 	irqentry_enter_from_user_mode(regs);
 
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 
 	/*
 	 * Start the virtual/ptrace DR6 value with just the DR_STEP mask
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index f2250a32a10ca..676e394f1af5b 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1557,7 +1557,7 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 	 */
 	state = irqentry_enter(regs);
 
-	instrumentation_begin();
+	instrumentation_begin_with_regs(regs);
 	handle_page_fault(regs, error_code, address);
 	instrumentation_end();
 
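The definition of instrumentation_begin_with_regs() itself is introduced
earlier in the series and is not visible in this excerpt. Conceptually it
pairs the usual instrumentation_begin() with a hint that the register frame
written by uninstrumented entry code is initialized; the standalone sketch
below models that idea with stand-in functions and is not the kernel macro:

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-ins, for illustration only. */
struct pt_regs { unsigned long ax, bx, cx, dx; };

static void instrumentation_begin(void) { /* no-op stand-in */ }
static void kmsan_unpoison_memory(const void *p, size_t n)
{
	printf("unpoison %zu bytes at %p\n", n, p);
}

/* Sketch of the idea: besides marking the start of instrumentable code,
 * tell KMSAN that the saved registers handed over by noinstr code are
 * initialized, so reads of regs->ax etc. don't produce false reports. */
#define instrumentation_begin_with_regs(regs) do {		\
	instrumentation_begin();				\
	kmsan_unpoison_memory((regs), sizeof(*(regs)));		\
} while (0)

int main(void)
{
	struct pt_regs regs;

	memset(&regs, 0, sizeof(regs));	/* models the asm-written frame */
	instrumentation_begin_with_regs(&regs);
	return 0;
}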
From patchwork Tue Apr 26 16:43:15 2022
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 12827528
Subject: [PATCH v3 46/46] x86: kmsan: enable KMSAN builds for x86
From: Alexander Potapenko
Date: Tue, 26 Apr 2022 18:43:15 +0200
Message-Id: <20220426164315.625149-47-glider@google.com>
In-Reply-To: <20220426164315.625149-1-glider@google.com>
References: <20220426164315.625149-1-glider@google.com>

Make KMSAN usable by adding the necessary Kconfig bits.

Signed-off-by: Alexander Potapenko
---
Link: https://linux-review.googlesource.com/id/I1d295ce8159ce15faa496d20089d953a919c125e
---
 arch/x86/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 3209073f96415..592f5ca2017c2 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -168,6 +168,7 @@ config X86
 	select HAVE_ARCH_KASAN			if X86_64
 	select HAVE_ARCH_KASAN_VMALLOC		if X86_64
 	select HAVE_ARCH_KFENCE
+	select HAVE_ARCH_KMSAN			if X86_64
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_MMAP_RND_BITS		if MMU
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if MMU && COMPAT