From patchwork Wed Mar 25 16:12:12 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458221
Date: Wed, 25 Mar 2020 17:12:12 +0100
Message-Id: <20200325161249.55095-2-glider@google.com>
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 01/38] stackdepot: reserve 5 extra bits in depot_stack_handle_t
From: glider@google.com
To: Vegard Nossum, Dmitry Vyukov, Marco Elver, Andrey Konovalov, linux-mm@kvack.org

Some users (currently only KMSAN) may want to use spare bits in
depot_stack_handle_t. Let them do so and provide get_dsh_extra_bits()
and set_dsh_extra_bits() to access those bits.

Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org

---
Change-Id: I23580dbde85908eeda0bdd8f83a8c3882ab3e012
---
 include/linux/stackdepot.h |  8 ++++++++
 lib/stackdepot.c           | 24 +++++++++++++++++++++-
 2 files changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
index 24d49c732341a..ac1b5a78d7f65 100644
--- a/include/linux/stackdepot.h
+++ b/include/linux/stackdepot.h
@@ -12,6 +12,11 @@
 #define _LINUX_STACKDEPOT_H
 typedef u32 depot_stack_handle_t;
+/*
+ * Number of bits in the handle that stack depot doesn't use. Users may store
+ * information in them.
+ */
+#define STACK_DEPOT_EXTRA_BITS 5
 depot_stack_handle_t stack_depot_save(unsigned long *entries,
				       unsigned int nr_entries, gfp_t gfp_flags);
@@ -20,5 +25,8 @@ unsigned int stack_depot_fetch(depot_stack_handle_t handle,
			       unsigned long **entries);
 unsigned int filter_irq_stacks(unsigned long *entries, unsigned int nr_entries);
+depot_stack_handle_t set_dsh_extra_bits(depot_stack_handle_t handle,
+					unsigned int bits);
+unsigned int get_dsh_extra_bits(depot_stack_handle_t handle);
 #endif
diff --git a/lib/stackdepot.c b/lib/stackdepot.c
index 2caffc64e4c82..195ce3dc7c37e 100644
--- a/lib/stackdepot.c
+++ b/lib/stackdepot.c
@@ -40,8 +40,10 @@
 #define STACK_ALLOC_ALIGN 4
 #define STACK_ALLOC_OFFSET_BITS (STACK_ALLOC_ORDER + PAGE_SHIFT - \
		STACK_ALLOC_ALIGN)
+
 #define STACK_ALLOC_INDEX_BITS (DEPOT_STACK_BITS - \
-	STACK_ALLOC_NULL_PROTECTION_BITS - STACK_ALLOC_OFFSET_BITS)
+	STACK_ALLOC_NULL_PROTECTION_BITS - \
+	STACK_ALLOC_OFFSET_BITS - STACK_DEPOT_EXTRA_BITS)
 #define STACK_ALLOC_SLABS_CAP 8192
 #define STACK_ALLOC_MAX_SLABS \
	(((1LL << (STACK_ALLOC_INDEX_BITS)) < STACK_ALLOC_SLABS_CAP) ? \
@@ -54,6 +56,7 @@ union handle_parts {
	u32 slabindex : STACK_ALLOC_INDEX_BITS;
	u32 offset : STACK_ALLOC_OFFSET_BITS;
	u32 valid : STACK_ALLOC_NULL_PROTECTION_BITS;
+	u32 extra : STACK_DEPOT_EXTRA_BITS;
	};
 };
@@ -72,6 +75,24 @@ static int next_slab_inited;
 static size_t depot_offset;
 static DEFINE_SPINLOCK(depot_lock);
+depot_stack_handle_t set_dsh_extra_bits(depot_stack_handle_t handle,
+					u32 bits)
+{
+	union handle_parts parts = { .handle = handle };
+
+	parts.extra = bits & ((1U << STACK_DEPOT_EXTRA_BITS) - 1);
+	return parts.handle;
+}
+EXPORT_SYMBOL_GPL(set_dsh_extra_bits);
+
+u32 get_dsh_extra_bits(depot_stack_handle_t handle)
+{
+	union handle_parts parts = { .handle = handle };
+
+	return parts.extra;
+}
+EXPORT_SYMBOL_GPL(get_dsh_extra_bits);
+
 static bool init_stack_slab(void **prealloc)
 {
	if (!*prealloc)
@@ -136,6 +157,7 @@ static struct stack_record *depot_alloc_stack(unsigned long *entries, int size,
	stack->handle.slabindex = depot_index;
	stack->handle.offset = depot_offset >> STACK_ALLOC_ALIGN;
	stack->handle.valid = 1;
+	stack->handle.extra = 0;
	memcpy(stack->entries, entries, size * sizeof(unsigned long));
	depot_offset += required_size;
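[Editorial illustration, not part of the patch: a minimal sketch of how a user
such as KMSAN could combine the new helpers with stack_depot_save(); the
wrapper names are invented.]

    #include <linux/stackdepot.h>

    static depot_stack_handle_t save_stack_with_tag(unsigned long *entries,
                                                    unsigned int nr_entries,
                                                    unsigned int tag, gfp_t flags)
    {
            depot_stack_handle_t handle;

            handle = stack_depot_save(entries, nr_entries, flags);
            /* Only the low STACK_DEPOT_EXTRA_BITS (5) bits of @tag are kept. */
            return set_dsh_extra_bits(handle, tag);
    }

    static unsigned int stack_tag(depot_stack_handle_t handle)
    {
            /* Retrieve the bits stored alongside the depot index/offset. */
            return get_dsh_extra_bits(handle);
    }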
From patchwork Wed Mar 25 16:12:13 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458223
Date: Wed, 25 Mar 2020 17:12:13 +0100
Message-Id: <20200325161249.55095-3-glider@google.com>
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 02/38] kmsan: add ReST documentation
From: glider@google.com
To: Vegard Nossum, Dmitry Vyukov, Marco Elver, Andrey Konovalov, linux-mm@kvack.org

Add Documentation/dev-tools/kmsan.rst and reference it in the dev-tools
index.

Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org

---
v4:
 - address comments by Marco Elver:
   - remove contractions
   - fix references
   - minor fixes

Change-Id: Iac6345065e6804ef811f1124fdf779c67ff1530e
---
 Documentation/dev-tools/index.rst |   1 +
 Documentation/dev-tools/kmsan.rst | 424 ++++++++++++++++++++++++++++++
 2 files changed, 425 insertions(+)
 create mode 100644 Documentation/dev-tools/kmsan.rst

diff --git a/Documentation/dev-tools/index.rst b/Documentation/dev-tools/index.rst
index f7809c7b1ba9e..a3b9579fc810c 100644
--- a/Documentation/dev-tools/index.rst
+++ b/Documentation/dev-tools/index.rst
@@ -19,6 +19,7 @@ whole; patches welcome!
    kcov
    gcov
    kasan
+   kmsan
    ubsan
    kmemleak
    kcsan
diff --git a/Documentation/dev-tools/kmsan.rst b/Documentation/dev-tools/kmsan.rst
new file mode 100644
index 0000000000000..591c4809d46f3
--- /dev/null
+++ b/Documentation/dev-tools/kmsan.rst
@@ -0,0 +1,424 @@
+=============================
+KernelMemorySanitizer (KMSAN)
+=============================
+
+KMSAN is a dynamic memory error detector aimed at finding uses of uninitialized
+memory.
+It is based on compiler instrumentation, and is quite similar to the userspace
+`MemorySanitizer tool`_.
+ +Example report +============== +Here is an example of a real KMSAN report in ``packet_bind_spkt()``:: + + ================================================================== + BUG: KMSAN: uninit-value in strlen + CPU: 0 PID: 1074 Comm: packet Not tainted 4.8.0-rc6+ #1891 + Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011 + 0000000000000000 ffff88006b6dfc08 ffffffff82559ae8 ffff88006b6dfb48 + ffffffff818a7c91 ffffffff85b9c870 0000000000000092 ffffffff85b9c550 + 0000000000000000 0000000000000092 00000000ec400911 0000000000000002 + Call Trace: + [< inline >] __dump_stack lib/dump_stack.c:15 + [] dump_stack+0x238/0x290 lib/dump_stack.c:51 + [] kmsan_report+0x276/0x2e0 mm/kmsan/kmsan.c:1003 + [] __msan_warning+0x5b/0xb0 mm/kmsan/kmsan_instr.c:424 + [< inline >] strlen lib/string.c:484 + [] strlcpy+0x9d/0x200 lib/string.c:144 + [] packet_bind_spkt+0x144/0x230 net/packet/af_packet.c:3132 + [] SYSC_bind+0x40d/0x5f0 net/socket.c:1370 + [] SyS_bind+0x82/0xa0 net/socket.c:1356 + [] entry_SYSCALL_64_fastpath+0x13/0x8f arch/x86/entry/entry_64.o:? + chained origin: + [] save_stack_trace+0x27/0x50 arch/x86/kernel/stacktrace.c:67 + [< inline >] kmsan_save_stack_with_flags mm/kmsan/kmsan.c:322 + [< inline >] kmsan_save_stack mm/kmsan/kmsan.c:334 + [] kmsan_internal_chain_origin+0x118/0x1e0 mm/kmsan/kmsan.c:527 + [] __msan_set_alloca_origin4+0xc3/0x130 mm/kmsan/kmsan_instr.c:380 + [] SYSC_bind+0x129/0x5f0 net/socket.c:1356 + [] SyS_bind+0x82/0xa0 net/socket.c:1356 + [] entry_SYSCALL_64_fastpath+0x13/0x8f arch/x86/entry/entry_64.o:? + origin description: ----address@SYSC_bind (origin=00000000eb400911) + ================================================================== + +The report tells that the local variable ``address`` was created uninitialized +in ``SYSC_bind()`` (the ``bind`` system call implementation). The lower stack +trace corresponds to the place where this variable was created. + +The upper stack shows where the uninit value was used - in ``strlen()``. +It turned out that the contents of ``address`` were partially copied from the +userspace, but the buffer was not zero-terminated and contained some trailing +uninitialized bytes. + +``packet_bind_spkt()`` did not check the length of the buffer, but called +``strlcpy()`` on it, which called ``strlen()``, which started reading the +buffer byte by byte till it hit the uninitialized memory. + + + +KMSAN and Clang +=============== + +In order for KMSAN to work the kernel must be +built with Clang, which so far is the only compiler that has KMSAN support. +The kernel instrumentation pass is based on the userspace +`MemorySanitizer tool`_. Because of the instrumentation complexity it is +unlikely that any other compiler will support KMSAN soon. + +Right now the instrumentation pass supports x86_64 only. + +How to build +============ + +In order to build a kernel with KMSAN you will need a fresh Clang (10.0.0+, +trunk version r365008 or greater). Please refer to `LLVM documentation`_ +for the instructions on how to build Clang:: + + export KMSAN_CLANG_PATH=/path/to/clang + # Now configure and build the kernel with CONFIG_KMSAN enabled. + make CC=$KMSAN_CLANG_PATH + +How KMSAN works +=============== + +KMSAN shadow memory +------------------- + +KMSAN associates a metadata byte (also called shadow byte) with every byte of +kernel memory. +A bit in the shadow byte is set iff the corresponding bit of the kernel memory +byte is uninitialized. +Marking the memory uninitialized (i.e. 
setting its shadow bytes to 0xff) is +called poisoning, marking it initialized (setting the shadow bytes to 0x00) is +called unpoisoning. + +When a new variable is allocated on the stack, it is poisoned by default by +instrumentation code inserted by the compiler (unless it is a stack variable +that is immediately initialized). Any new heap allocation done without +``__GFP_ZERO`` is also poisoned. + +Compiler instrumentation also tracks the shadow values with the help from the +runtime library in ``mm/kmsan/``. + +The shadow value of a basic or compound type is an array of bytes of the same +length. +When a constant value is written into memory, that memory is unpoisoned. +When a value is read from memory, its shadow memory is also obtained and +propagated into all the operations which use that value. For every instruction +that takes one or more values the compiler generates code that calculates the +shadow of the result depending on those values and their shadows. + +Example:: + + int a = 0xff; + int b; + int c = a | b; + +In this case the shadow of ``a`` is ``0``, shadow of ``b`` is ``0xffffffff``, +shadow of ``c`` is ``0xffffff00``. This means that the upper three bytes of +``c`` are uninitialized, while the lower byte is initialized. + + +Origin tracking +--------------- + +Every four bytes of kernel memory also have a so-called origin assigned to +them. +This origin describes the point in program execution at which the uninitialized +value was created. Every origin is associated with a creation stack, which lets +the user figure out what is going on. + +When an uninitialized variable is allocated on stack or heap, a new origin +value is created, and that variable's origin is filled with that value. +When a value is read from memory, its origin is also read and kept together +with the shadow. For every instruction that takes one or more values the origin +of the result is one of the origins corresponding to any of the uninitialized +inputs. +If a poisoned value is written into memory, its origin is written to the +corresponding storage as well. + +Example 1:: + + int a = 0; + int b; + int c = a + b; + +In this case the origin of ``b`` is generated upon function entry, and is +stored to the origin of ``c`` right before the addition result is written into +memory. + +Several variables may share the same origin address, if they are stored in the +same four-byte chunk. +In this case every write to either variable updates the origin for all of them. + +Example 2:: + + int combine(short a, short b) { + union ret_t { + int i; + short s[2]; + } ret; + ret.s[0] = a; + ret.s[1] = b; + return ret.i; + } + +If ``a`` is initialized and ``b`` is not, the shadow of the result would be +0xffff0000, and the origin of the result would be the origin of ``b``. +``ret.s[0]`` would have the same origin, but it will be never used, because +that variable is initialized. + +If both function arguments are uninitialized, only the origin of the second +argument is preserved. + +Origin chaining +~~~~~~~~~~~~~~~ +To ease debugging, KMSAN creates a new origin for every memory store. +The new origin references both its creation stack and the previous origin the +memory location had. +This may cause increased memory consumption, so we limit the length of origin +chains in the runtime. + +Clang instrumentation API +------------------------- + +Clang instrumentation pass inserts calls to functions defined in +``mm/kmsan/kmsan_instr.c`` into the kernel code. 
+ +Shadow manipulation +~~~~~~~~~~~~~~~~~~~ +For every memory access the compiler emits a call to a function that returns a +pair of pointers to the shadow and origin addresses of the given memory:: + + typedef struct { + void *s, *o; + } shadow_origin_ptr_t + + shadow_origin_ptr_t __msan_metadata_ptr_for_load_{1,2,4,8}(void *addr) + shadow_origin_ptr_t __msan_metadata_ptr_for_store_{1,2,4,8}(void *addr) + shadow_origin_ptr_t __msan_metadata_ptr_for_load_n(void *addr, u64 size) + shadow_origin_ptr_t __msan_metadata_ptr_for_store_n(void *addr, u64 size) + +The function name depends on the memory access size. +Each such function also checks if the shadow of the memory in the range +[``addr``, ``addr + n``) is contiguous and reports an error otherwise. + +The compiler makes sure that for every loaded value its shadow and origin +values are read from memory. +When a value is stored to memory, its shadow and origin are also stored using +the metadata pointers. + +Origin tracking +~~~~~~~~~~~~~~~ +A special function is used to create a new origin value for a local variable +and set the origin of that variable to that value:: + + void __msan_poison_alloca(u64 address, u64 size, char *descr) + +Access to per-task data +~~~~~~~~~~~~~~~~~~~~~~~~~ + +At the beginning of every instrumented function KMSAN inserts a call to +``__msan_get_context_state()``:: + + kmsan_context_state *__msan_get_context_state(void) + +``kmsan_context_state`` is declared in ``include/linux/kmsan.h``:: + + struct kmsan_context_s { + char param_tls[KMSAN_PARAM_SIZE]; + char retval_tls[RETVAL_SIZE]; + char va_arg_tls[KMSAN_PARAM_SIZE]; + char va_arg_origin_tls[KMSAN_PARAM_SIZE]; + u64 va_arg_overflow_size_tls; + depot_stack_handle_t param_origin_tls[PARAM_ARRAY_SIZE]; + depot_stack_handle_t retval_origin_tls; + depot_stack_handle_t origin_tls; + }; + +This structure is used by KMSAN to pass parameter shadows and origins between +instrumented functions. + +String functions +~~~~~~~~~~~~~~~~ + +The compiler replaces calls to ``memcpy()``/``memmove()``/``memset()`` with the +following functions. These functions are also called when data structures are +initialized or copied, making sure shadow and origin values are copied alongside +with the data:: + + void *__msan_memcpy(void *dst, void *src, u64 n) + void *__msan_memmove(void *dst, void *src, u64 n) + void *__msan_memset(void *dst, int c, size_t n) + +Error reporting +~~~~~~~~~~~~~~~ + +For each pointer dereference and each condition the compiler emits a shadow +check that calls ``__msan_warning()`` in the case a poisoned value is being +used:: + + void __msan_warning(u32 origin) + +``__msan_warning()`` causes KMSAN runtime to print an error report. + +Inline assembly instrumentation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +KMSAN instruments every inline assembly output with a call to:: + + void __msan_instrument_asm_store(u64 addr, u64 size) + +, which unpoisons the memory region. + +This approach may mask certain errors, but it also helps to avoid a lot of +false positives in bitwise operations, atomics etc. + +Sometimes the pointers passed into inline assembly do not point to valid memory. +In such cases they are ignored at runtime. + +Disabling the instrumentation +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +A function can be marked with ``__no_sanitize_memory``. +Doing so does not remove KMSAN instrumentation from it, however it makes the +compiler ignore the uninitialized values coming from the function's inputs, +and initialize the function's outputs. 
+The compiler will not inline functions marked with this attribute into functions +not marked with it, and vice versa. + +It is also possible to disable KMSAN for a single file (e.g. main.o):: + + KMSAN_SANITIZE_main.o := n + +or for the whole directory:: + + KMSAN_SANITIZE := n + +in the Makefile. This comes at a cost however: stack allocations from such files +and parameters of instrumented functions called from them will have incorrect +shadow/origin values. As a rule of thumb, avoid using KMSAN_SANITIZE. + +Runtime library +--------------- +The code is located in ``mm/kmsan/``. + +Per-task KMSAN state +~~~~~~~~~~~~~~~~~~~~ + +Every task_struct has an associated KMSAN task state that holds the KMSAN +context (see above) and a per-task flag disallowing KMSAN reports:: + + struct kmsan_task_state { + ... + bool allow_reporting; + struct kmsan_context_state cstate; + ... + } + + struct task_struct { + ... + struct kmsan_task_state kmsan; + ... + } + + +KMSAN contexts +~~~~~~~~~~~~~~ + +When running in a kernel task context, KMSAN uses ``current->kmsan.cstate`` to +hold the metadata for function parameters and return values. + +But in the case the kernel is running in the interrupt, softirq or NMI context, +where ``current`` is unavailable, KMSAN switches to per-cpu interrupt state:: + + DEFINE_PER_CPU(kmsan_context_state[KMSAN_NESTED_CONTEXT_MAX], + kmsan_percpu_cstate); + +Metadata allocation +~~~~~~~~~~~~~~~~~~~ +There are several places in the kernel for which the metadata is stored. + +1. Each ``struct page`` instance contains two pointers to its shadow and +origin pages:: + + struct page { + ... + struct page *shadow, *origin; + ... + }; + +Every time a ``struct page`` is allocated, the runtime library allocates two +additional pages to hold its shadow and origins. This is done by adding hooks +to ``alloc_pages()``/``free_pages()`` in ``mm/page_alloc.c``. +To avoid allocating the metadata for non-interesting pages (right now only the +shadow/origin page themselves and stackdepot storage) the +``__GFP_NO_KMSAN_SHADOW`` flag is used. + +There is a problem related to this allocation algorithm: when two contiguous +memory blocks are allocated with two different ``alloc_pages()`` calls, their +shadow pages may not be contiguous. So, if a memory access crosses the boundary +of a memory block, accesses to shadow/origin memory may potentially corrupt +other pages or read incorrect values from them. + +As a workaround, we check the access size in +``__msan_metadata_ptr_for_XXX_YYY()`` and return a pointer to a fake shadow +region in the case of an error:: + + char dummy_load_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE))); + char dummy_store_page[PAGE_SIZE] __attribute__((aligned(PAGE_SIZE))); + +``dummy_load_page`` is zero-initialized, so reads from it always yield zeroes. +All stores to ``dummy_store_page`` are ignored. + +Unfortunately at boot time we need to allocate shadow and origin pages for the +kernel data (``.data``, ``.bss`` etc.) and percpu memory regions, the size of +which is not a power of 2. As a result, we have to allocate the metadata page by +page, so that it is also non-contiguous, although it may be perfectly valid to +access the corresponding kernel memory across page boundaries. +This can be probably fixed by allocating 1<`_. +In Proceedings of CGO 2015. + +.. _MemorySanitizer tool: https://clang.llvm.org/docs/MemorySanitizer.html +.. 
_LLVM documentation: https://llvm.org/docs/GettingStarted.html
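[Editorial illustration of the instrumentation API documented above; this is
not compiler output and not part of the patch. The parameters v_shadow and
v_origin are invented stand-ins for the metadata the compiler tracks for v.]

    /* Roughly how an instrumented 4-byte store "*p = v;" consults the runtime. */
    static void store_u32_instrumented(u32 *p, u32 v, u32 v_shadow, u32 v_origin)
    {
            shadow_origin_ptr_t meta = __msan_metadata_ptr_for_store_4(p);

            *(u32 *)meta.s = v_shadow;      /* propagate the shadow of v */
            if (v_shadow)                   /* v is (partially) poisoned */
                    *(u32 *)meta.o = v_origin;
            *p = v;                         /* the original store */
    }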
From patchwork Wed Mar 25 16:12:14 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458225
Date: Wed, 25 Mar 2020 17:12:14 +0100
Message-Id: <20200325161249.55095-4-glider@google.com>
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 03/38] kmsan: gfp: introduce __GFP_NO_KMSAN_SHADOW
From: glider@google.com
To: Vegard Nossum, Andrew Morton, Michal Hocko, Dmitry Vyukov, Marco Elver,
    Andrey Konovalov, linux-mm@kvack.org

This flag is to be used by the KMSAN runtime to mark that newly created
memory pages don't need KMSAN metadata backing them.

Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Vegard Nossum
Cc: Andrew Morton
Cc: Michal Hocko
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org

---
We can't decide what to do here:
 - do we need to conditionally define ___GFP_NO_KMSAN_SHADOW depending on
   CONFIG_KMSAN like LOCKDEP does?
 - if KMSAN is defined, and LOCKDEP is not, do we want to "compactify" the
   GFP bits?
Change-Id: If5d0352fd5711ad103328e2c185eb885e826423a
---
 include/linux/gfp.h | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index be2754841369e..e1ab42b5e9ce2 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -44,6 +44,7 @@ struct vm_area_struct;
 #else
 #define ___GFP_NOLOCKDEP 0
 #endif
+#define ___GFP_NO_KMSAN_SHADOW 0x1000000u
 /* If the above are modified, __GFP_BITS_SHIFT may need updating */
 /*
@@ -212,12 +213,13 @@ struct vm_area_struct;
 #define __GFP_NOWARN ((__force gfp_t)___GFP_NOWARN)
 #define __GFP_COMP ((__force gfp_t)___GFP_COMP)
 #define __GFP_ZERO ((__force gfp_t)___GFP_ZERO)
+#define __GFP_NO_KMSAN_SHADOW ((__force gfp_t)___GFP_NO_KMSAN_SHADOW)
 /* Disable lockdep for GFP context tracking */
 #define __GFP_NOLOCKDEP ((__force gfp_t)___GFP_NOLOCKDEP)
 /* Room for N __GFP_FOO bits */
-#define __GFP_BITS_SHIFT (23 + IS_ENABLED(CONFIG_LOCKDEP))
+#define __GFP_BITS_SHIFT (25)
 #define __GFP_BITS_MASK ((__force gfp_t)((1 << __GFP_BITS_SHIFT) - 1))
 /**
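[Editorial illustration, not part of the patch: a minimal sketch of how the
KMSAN runtime would pass the new flag to the page allocator so that the
returned page gets no shadow/origin pages of its own; the helper name is
invented.]

    #include <linux/gfp.h>

    static struct page *alloc_untracked_page(void)
    {
            /* Contents of this page will not be tracked by KMSAN. */
            return alloc_pages(GFP_KERNEL | __GFP_NO_KMSAN_SHADOW, 0);
    }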
From patchwork Wed Mar 25 16:12:15 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458227
Date: Wed, 25 Mar 2020 17:12:15 +0100
Message-Id: <20200325161249.55095-5-glider@google.com>
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 04/38] kmsan: introduce __no_sanitize_memory and __SANITIZE_MEMORY__
From: glider@google.com
To: Vegard Nossum, Dmitry Vyukov, Marco Elver, Andrey Konovalov, linux-mm@kvack.org
__no_sanitize_memory is a function attribute that makes KMSAN ignore the
uninitialized values coming from the function's inputs, and initialize the
function's outputs.

Functions marked with this attribute can't be inlined into functions not
marked with it, and vice versa.

__SANITIZE_MEMORY__ is a macro that's defined iff the file is instrumented
with KMSAN. This is not the same as CONFIG_KMSAN, which is defined for
every file.

Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org
Acked-by: Marco Elver
Reviewed-by: Andrey Konovalov

---
v4:
 - dropped an unnecessary comment as requested by Marco Elver

Change-Id: I1f1672652c8392f15f7ca8ac26cd4e71f9cc1e4b
---
 include/linux/compiler-clang.h | 7 +++++++
 include/linux/compiler-gcc.h   | 5 +++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index 2cb42d8bdedc6..d4f929b4a6705 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -33,6 +33,13 @@
 #define __no_sanitize_thread
 #endif
+#if __has_feature(memory_sanitizer)
+# define __SANITIZE_MEMORY__
+# define __no_sanitize_memory __attribute__((no_sanitize("kernel-memory")))
+#else
+# define __no_sanitize_memory
+#endif
+
 /*
  * Not all versions of clang implement the the type-generic versions
  * of the builtin overflow checkers. Fortunately, clang implements
diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
index cf294faec2f87..1121557252f88 100644
--- a/include/linux/compiler-gcc.h
+++ b/include/linux/compiler-gcc.h
@@ -151,6 +151,11 @@
 #define __no_sanitize_thread
 #endif
+/*
+ * GCC doesn't support KMSAN.
+ */
+#define __no_sanitize_memory
+
 #if GCC_VERSION >= 50100
 #define COMPILER_HAS_GENERIC_BUILTIN_OVERFLOW 1
 #endif
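[Editorial illustration, not part of the patch: a sketch of how the new
attribute and macro would typically be used; the function and its purpose
are invented.]

    /*
     * A function that reads memory KMSAN cannot see being initialized
     * (e.g. written by hardware or assembly) can opt out of checking:
     * KMSAN ignores uninitialized inputs and treats the return value
     * as initialized.
     */
    __no_sanitize_memory
    static unsigned long read_hw_word(const unsigned long *hw_buf)
    {
            return hw_buf[0];
    }

    #ifdef __SANITIZE_MEMORY__
    /* Compiled only when this file is actually instrumented by KMSAN. */
    #endif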
From patchwork Wed Mar 25 16:12:16 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458231
Date: Wed, 25 Mar 2020 17:12:16 +0100
Message-Id: <20200325161249.55095-6-glider@google.com>
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 05/38] kmsan: reduce vmalloc space
From: glider@google.com
To: Vegard Nossum, Andrew Morton, Dmitry Vyukov, Marco Elver, Andrey Konovalov,
    linux-mm@kvack.org

KMSAN is going to use 3/4 of the existing vmalloc space to hold its
metadata; therefore we lower VMALLOC_END to make sure vmalloc() doesn't
allocate past the first 1/4.
Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Vegard Nossum
Cc: Andrew Morton
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org

---
Change-Id: Iaa5e8e0fc2aa66c956f937f5a1de6e5ef40d57cc
---
 arch/x86/include/asm/pgtable_64_types.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d9..586629e204366 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -139,7 +139,22 @@ extern unsigned int ptrs_per_p4d;
 # define VMEMMAP_START __VMEMMAP_BASE_L4
 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
+#ifndef CONFIG_KMSAN
 #define VMALLOC_END (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
+#else
+/*
+ * In KMSAN builds vmalloc area is four times smaller, and the remaining 3/4
+ * are used to keep the metadata for virtual pages.
+ */
+#define VMALLOC_QUARTER_SIZE ((VMALLOC_SIZE_TB << 40) >> 2)
+#define VMALLOC_END (VMALLOC_START + VMALLOC_QUARTER_SIZE - 1)
+#define VMALLOC_SHADOW_OFFSET VMALLOC_QUARTER_SIZE
+#define VMALLOC_ORIGIN_OFFSET (VMALLOC_QUARTER_SIZE * 2)
+#define VMALLOC_META_END (VMALLOC_END + VMALLOC_ORIGIN_OFFSET)
+#define MODULES_SHADOW_START (VMALLOC_META_END + 1)
+#define MODULES_ORIGIN_START (MODULES_SHADOW_START + MODULES_LEN)
+#define MODULES_ORIGIN_END (MODULES_ORIGIN_START + MODULES_LEN)
+#endif
 #define MODULES_VADDR (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
 /* The module sections ends with the start of the fixmap */
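[Editorial illustration, not part of the patch: a sketch of how shadow and
origin addresses for a vmalloc pointer would be derived from the offsets
defined above on x86_64; the helper names are invented.]

    /* Metadata lives at fixed offsets from the vmalloc address. */
    static void *vmalloc_shadow_addr(void *addr)
    {
            return (void *)((unsigned long)addr + VMALLOC_SHADOW_OFFSET);
    }

    static void *vmalloc_origin_addr(void *addr)
    {
            return (void *)((unsigned long)addr + VMALLOC_ORIGIN_OFFSET);
    }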
From patchwork Wed Mar 25 16:12:17 2020
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11458235
Date: Wed, 25 Mar 2020 17:12:17 +0100
Message-Id: <20200325161249.55095-7-glider@google.com>
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 06/38] kmsan: add KMSAN runtime core
From: glider@google.com
To: Jens Axboe, Andy Lutomirski, Wolfram Sang, Christoph Hellwig, Vegard Nossum,
    Dmitry Vyukov, Marco Elver, Andrey Konovalov, linux-mm@kvack.org
darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This patch adds the core parts of KMSAN runtime and associated files: - include/linux/kmsan-checks.h: user API to poison/unpoison/check memory - include/linux/kmsan.h: declarations of KMSAN memory hooks to be referenced outside KMSAN runtime - lib/Kconfig.kmsan: declarations for CONFIG_KMSAN and CONFIG_TEST_KMSAN - mm/kmsan/Makefile: boilerplate Makefile - mm/kmsan/kmsan.h: internal KMSAN declarations - mm/kmsan/kmsan.c: core functions that operate with shadow and origin memory and perform checks, utility functions - mm/kmsan/kmsan_init.c: KMSAN initialization routines - scripts/Makefile.kmsan: CFLAGS_KMSAN The patch also adds the necessary bookkeeping bits to struct page and struct task_struct: - each struct page now contains pointers to two struct pages holding KMSAN metadata (shadow and origins) for the original struct page; - each task_struct contains a struct kmsan_task_state used to track the metadata of function parameters and return values for that task. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Jens Axboe Cc: Andy Lutomirski Cc: Wolfram Sang Cc: Christoph Hellwig Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- v2: - dropped kmsan_handle_vprintk() - use locking for single kmsan_pr_err() calls - don't try to understand we're inside printk() v3: - fix an endless loop in __msan_poison_alloca() - implement kmsan_handle_dma() - dropped kmsan_handle_i2c_transfer() - fixed compilation with UNWINDER_ORC - dropped assembly hooks for system calls v4: - splitted away some runtime parts to ease the review process - fix a lot of comments by Marco Elver and Andrey Konovalov: -- clean up headers and #defines, remove debugging code -- dropped kmsan_pr_* macros, fixed reporting code -- removed TODOs -- simplified kmsan_get_shadow_origin_ptr() - actually filter out IRQ frames using filter_irq_stacks() - simplify kmsan_get_metadata() - include build_bug.h into kmsan-checks.h - don't instrument KMSAN files with stackprotector - squashed "kmsan: add KMSAN bits to struct page and struct task_struct" into this patch as requested by Marco Elver v5: - s/kmsan_softirq/kmsan_context everywhere (spotted by kbuild test robot ) Change-Id: I4b3a7aba6d5804afac4f5f7274cadf8675b6e119 --- arch/x86/Kconfig | 1 + include/linux/kmsan-checks.h | 127 ++++++++ include/linux/kmsan.h | 335 +++++++++++++++++++++ include/linux/mm_types.h | 9 + include/linux/sched.h | 5 + lib/Kconfig.debug | 2 + lib/Kconfig.kmsan | 22 ++ mm/kmsan/Makefile | 11 + mm/kmsan/kmsan.c | 547 +++++++++++++++++++++++++++++++++++ mm/kmsan/kmsan.h | 161 +++++++++++ mm/kmsan/kmsan_init.c | 79 +++++ mm/kmsan/kmsan_report.c | 143 +++++++++ mm/kmsan/kmsan_shadow.c | 456 +++++++++++++++++++++++++++++ 
mm/kmsan/kmsan_shadow.h | 30 ++ scripts/Makefile.kmsan | 12 + 15 files changed, 1940 insertions(+) create mode 100644 include/linux/kmsan-checks.h create mode 100644 include/linux/kmsan.h create mode 100644 lib/Kconfig.kmsan create mode 100644 mm/kmsan/Makefile create mode 100644 mm/kmsan/kmsan.c create mode 100644 mm/kmsan/kmsan.h create mode 100644 mm/kmsan/kmsan_init.c create mode 100644 mm/kmsan/kmsan_report.c create mode 100644 mm/kmsan/kmsan_shadow.c create mode 100644 mm/kmsan/kmsan_shadow.h create mode 100644 scripts/Makefile.kmsan diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 8d298164dda2a..376c13480def2 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -140,6 +140,7 @@ config X86 select HAVE_ARCH_KASAN if X86_64 select HAVE_ARCH_KASAN_VMALLOC if X86_64 select HAVE_ARCH_KCSAN if X86_64 + select HAVE_ARCH_KMSAN if X86_64 select HAVE_ARCH_KGDB select HAVE_ARCH_MMAP_RND_BITS if MMU select HAVE_ARCH_MMAP_RND_COMPAT_BITS if MMU && COMPAT diff --git a/include/linux/kmsan-checks.h b/include/linux/kmsan-checks.h new file mode 100644 index 0000000000000..2e4b8001e8d96 --- /dev/null +++ b/include/linux/kmsan-checks.h @@ -0,0 +1,127 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN checks to be used for one-off annotations in subsystems. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#ifndef _LINUX_KMSAN_CHECKS_H +#define _LINUX_KMSAN_CHECKS_H + +#include +#include + +#ifdef CONFIG_KMSAN + +/* + * Helper functions that mark the return value initialized. + * Note that Clang ignores the inline attribute in the cases when a no_sanitize + * function is called from an instrumented one. For the same reason these + * functions may not be declared __always_inline - in that case they dissolve in + * the callers and KMSAN won't be able to notice they should not be + * instrumented. + */ + +__no_sanitize_memory +static inline u8 KMSAN_INIT_1(u8 value) +{ + return value; +} + +__no_sanitize_memory +static inline u16 KMSAN_INIT_2(u16 value) +{ + return value; +} + +__no_sanitize_memory +static inline u32 KMSAN_INIT_4(u32 value) +{ + return value; +} + +__no_sanitize_memory +static inline u64 KMSAN_INIT_8(u64 value) +{ + return value; +} + +/** + * KMSAN_INIT_VALUE - Make the value initialized. + * @val: 1-, 2-, 4- or 8-byte integer that may be treated as uninitialized by + * KMSAN's. + * + * Return: value of @val that KMSAN treats as initialized. + */ +#define KMSAN_INIT_VALUE(val) \ + ({ \ + typeof(val) __ret; \ + switch (sizeof(val)) { \ + case 1: \ + *(u8 *)&__ret = KMSAN_INIT_1((u8)val); \ + break; \ + case 2: \ + *(u16 *)&__ret = KMSAN_INIT_2((u16)val);\ + break; \ + case 4: \ + *(u32 *)&__ret = KMSAN_INIT_4((u32)val);\ + break; \ + case 8: \ + *(u64 *)&__ret = KMSAN_INIT_8((u64)val);\ + break; \ + default: \ + BUILD_BUG_ON(1); \ + } \ + __ret; \ + }) /**/ + +/** + * kmsan_poison_shadow() - Mark the memory range as uninitialized. + * @address: address to start with. + * @size: size of buffer to poison. + * @flags: GFP flags for allocations done by this function. + * + * Until other data is written to this range, KMSAN will treat it as + * uninitialized. Error reports for this memory will reference the call site of + * kmsan_poison_shadow() as origin. 
+ */ +void kmsan_poison_shadow(const void *address, size_t size, gfp_t flags); + +/** + * kmsan_unpoison_shadow() - Mark the memory range as initialized. + * @address: address to start with. + * @size: size of buffer to unpoison. + * + * Until other data is written to this range, KMSAN will treat it as + * initialized. + */ +void kmsan_unpoison_shadow(const void *address, size_t size); + +/** + * kmsan_check_memory() - Check the memory range for being initialized. + * @address: address to start with. + * @size: size of buffer to check. + * + * If any piece of the given range is marked as uninitialized, KMSAN will report + * an error. + */ +void kmsan_check_memory(const void *address, size_t size); + +#else + +#define KMSAN_INIT_VALUE(value) (value) + +static inline void kmsan_poison_shadow(const void *address, size_t size, + gfp_t flags) {} +static inline void kmsan_unpoison_shadow(const void *address, size_t size) {} +static inline void kmsan_check_memory(const void *address, size_t size) {} + +#endif + +#endif /* _LINUX_KMSAN_CHECKS_H */ diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h new file mode 100644 index 0000000000000..071e75f426f7a --- /dev/null +++ b/include/linux/kmsan.h @@ -0,0 +1,335 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN API for subsystems. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ +#ifndef LINUX_KMSAN_H +#define LINUX_KMSAN_H + +#include +#include +#include +#include +#include + +struct page; +struct kmem_cache; +struct task_struct; +struct sk_buff; +struct urb; + +#ifdef CONFIG_KMSAN + +/* These constants are defined in the MSan LLVM instrumentation pass. */ +#define KMSAN_RETVAL_SIZE 800 +#define KMSAN_PARAM_SIZE 800 +#define KMSAN_PARAM_ARRAY_SIZE (KMSAN_PARAM_SIZE / sizeof(depot_stack_handle_t)) + +struct kmsan_context_state { + char param_tls[KMSAN_PARAM_SIZE]; + char retval_tls[KMSAN_RETVAL_SIZE]; + char va_arg_tls[KMSAN_PARAM_SIZE]; + char va_arg_origin_tls[KMSAN_PARAM_SIZE]; + u64 va_arg_overflow_size_tls; + depot_stack_handle_t param_origin_tls[KMSAN_PARAM_ARRAY_SIZE]; + depot_stack_handle_t retval_origin_tls; + depot_stack_handle_t origin_tls; +}; + +#undef KMSAN_PARAM_ARRAY_SIZE +#undef KMSAN_PARAM_SIZE +#undef KMSAN_RETVAL_SIZE + +struct kmsan_task_state { + bool allow_reporting; + struct kmsan_context_state cstate; +}; + +/** + * kmsan_initialize_shadow() - Initialize KMSAN shadow at boot time. + * + * Allocate and initialize KMSAN metadata for early allocations. + */ +void __init kmsan_initialize_shadow(void); + +/** + * kmsan_initialize() - Initialize KMSAN state and enable KMSAN. + */ +void __init kmsan_initialize(void); + +/** + * kmsan_task_create() - Initialize KMSAN state for the task. + * @task: task to initialize. + */ +void kmsan_task_create(struct task_struct *task); + +/** + * kmsan_task_exit() - Notify KMSAN that a task has exited. + * @task: task about to finish. + */ +void kmsan_task_exit(struct task_struct *task); + +/** + * kmsan_alloc_page() - Notify KMSAN about an alloc_pages() call. + * @page: struct page pointer returned by alloc_pages(). + * @order: order of allocated struct page. 
+ * @flags: GFP flags used by alloc_pages() + * + * Return: + * * 0 - Ok + * * -ENOMEM - allocation failure + * + * KMSAN allocates metadata (shadow and origin pages) for @page and marks + * 1<<@order pages starting at @page as uninitialized, unless @flags contain + * __GFP_ZERO. + */ +int kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags); + +/** + * kmsan_free_page() - Notify KMSAN about a free_pages() call. + * @page: struct page pointer passed to free_pages(). + * @order: order of deallocated struct page. + * + * KMSAN deallocates the metadata pages for the given struct page. + */ +void kmsan_free_page(struct page *page, unsigned int order); + +/** + * kmsan_split_page() - Notify KMSAN about a split_page() call. + * @page: struct page pointer passed to split_page(). + * @order: order of split struct page. + * + * KMSAN splits the metadata pages for the given struct page, so that they + * can be deallocated separately. + */ +void kmsan_split_page(struct page *page, unsigned int order); + +/** + * kmsan_copy_page_meta() - Copy KMSAN metadata between two pages. + * @dst: destination page. + * @src: source page. + * + * KMSAN copies the contents of metadata pages for @src into the metadata pages + * for @dst. If @dst has no associated metadata pages, nothing happens. + * If @src has no associated metadata pages, @dst metadata pages are unpoisoned. + */ +void kmsan_copy_page_meta(struct page *dst, struct page *src); + +/** + * kmsan_gup_pgd_range() - Notify KMSAN about a gup_pgd_range() call. + * @pages: array of struct page pointers. + * @nr: array size. + * + * gup_pgd_range() creates new pages, some of which may belong to the userspace + * memory. In that case these pages should be initialized. + */ +void kmsan_gup_pgd_range(struct page **pages, int nr); + +/** + * kmsan_slab_alloc() - Notify KMSAN about a slab allocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * @flags: GFP flags passed to the allocator. + * + * Depending on cache flags and GFP flags, KMSAN sets up the metadata of the + * newly created object, marking it initialized or uninitialized. + */ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags); + +/** + * kmsan_slab_free() - Notify KMSAN about a slab deallocation. + * @s: slab cache the object belongs to. + * @object: object pointer. + * + * KMSAN marks the freed object as uninitialized. + */ +void kmsan_slab_free(struct kmem_cache *s, void *object); + +/** + * kmsan_kmalloc_large() - Notify KMSAN about a large slab allocation. + * @ptr: object pointer. + * @size: object size. + * @flags: GFP flags passed to the allocator. + * + * Similar to kmsan_slab_alloc(), but for large allocations. + */ +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags); + +/** + * kmsan_kfree_large() - Notify KMSAN about a large slab deallocation. + * @ptr: object pointer. + * + * Similar to kmsan_slab_free(), but for large allocations. + */ +void kmsan_kfree_large(const void *ptr); + +/** + * kmsan_vmap_page_range_noflush() - Notify KMSAN about a vmap. + * @start: start address of vmapped range. + * @end: end address of vmapped range. + * @prot: page protection flags used for vmap. + * @pages: array of pages. + * + * KMSAN maps shadow and origin pages of @pages into contiguous ranges in + * vmalloc metadata address range. + */ +void kmsan_vmap_page_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages); + +/** + * kmsan_vunmap_page_range() - Notify KMSAN about a vunmap. 
+ * @addr: start address of vunmapped range. + * @end: end address of vunmapped range. + * + * KMSAN unmaps the contiguous metadata ranges created by + * kmsan_vmap_page_range_noflush(). + */ +void kmsan_vunmap_page_range(unsigned long addr, unsigned long end); + +/** + * kmsan_ioremap_page_range() - Notify KMSAN about a ioremap_page_range() call. + * @addr: range start. + * @end: range end. + * @phys_addr: physical range start. + * @prot: page protection flags used for ioremap_page_range(). + * + * KMSAN creates new metadata pages for the physical pages mapped into the + * virtual memory. + */ +void kmsan_ioremap_page_range(unsigned long addr, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot); + +/** + * kmsan_iounmap_page_range() - Notify KMSAN about a iounmap_page_range() call. + * @start: range start. + * @end: range end. + * + * KMSAN unmaps the metadata pages for the given range and, unlike for + * vunmap_page_range(), also deallocates them. + */ +void kmsan_iounmap_page_range(unsigned long start, unsigned long end); + +/** + * kmsan_context_enter() - Notify KMSAN about a context entry. + * + * This function should be called whenever the kernel leaves the current task + * and enters an IRQ, softirq or NMI context. KMSAN will switch the task state + * to a per-thread storage. + */ +void kmsan_context_enter(void); + +/** + * kmsan_context_exit() - Notify KMSAN about a context exit. + * + * This function should be called when the kernel leaves the previously entered + * context. + */ +void kmsan_context_exit(void); + +/** + * kmsan_copy_to_user() - Notify KMSAN about a data transfer to userspace. + * @to: destination address in the userspace. + * @from: source address in the kernel. + * @to_copy: number of bytes to copy. + * @left: number of bytes not copied. + * + * If this is a real userspace data transfer, KMSAN checks the bytes that were + * actually copied to ensure there was no information leak. If @to belongs to + * the kernel space (which is possible for compat syscalls), KMSAN just copies + * the metadata. + */ +void kmsan_copy_to_user(const void *to, const void *from, size_t to_copy, + size_t left); + +/** + * kmsan_check_skb() - Check an sk_buff for being initialized. + * + * KMSAN checks the memory belonging to a socket buffer and reports an error if + * contains uninitialized values. + */ +void kmsan_check_skb(const struct sk_buff *skb); + +/** + * kmsan_handle_dma() - Handle a DMA data transfer. + * @address: buffer address. + * @size: buffer size. + * @direction: one of possible dma_data_direction values. + * + * Depending on @direction, KMSAN: + * * checks the buffer, if it is copied to device; + * * initializes the buffer, if it is copied from device; + * * does both, if this is a DMA_BIDIRECTIONAL transfer. + */ +void kmsan_handle_dma(const void *address, size_t size, + enum dma_data_direction direction); + +/** + * kmsan_handle_urb() - Handle a USB data transfer. + * @urb: struct urb pointer. + * @is_out: data transfer direction (true means output to hardware) + * + * If @is_out is true, KMSAN checks the transfer buffer of @urb. Otherwise, + * KMSAN initializes the transfer buffer. 
+ */ +void kmsan_handle_urb(const struct urb *urb, bool is_out); + +#else + +static inline void __init kmsan_initialize_shadow(void) { } +static inline void __init kmsan_initialize(void) { } + +static inline void kmsan_task_create(struct task_struct *task) {} +static inline void kmsan_task_exit(struct task_struct *task) {} + +static inline int kmsan_alloc_page(struct page *page, unsigned int order, + gfp_t flags) +{ + return 0; +} +static inline void kmsan_free_page(struct page *page, unsigned int order) {} +static inline void kmsan_split_page(struct page *page, unsigned int order) {} +static inline void kmsan_copy_page_meta(struct page *dst, struct page *src) {} +static inline void kmsan_gup_pgd_range(struct page **pages, int nr) {} + +static inline void kmsan_slab_alloc(struct kmem_cache *s, void *object, + gfp_t flags) {} +static inline void kmsan_slab_free(struct kmem_cache *s, void *object) {} +static inline void kmsan_kmalloc_large(const void *ptr, size_t size, + gfp_t flags) {} +static inline void kmsan_kfree_large(const void *ptr) {} + +static inline void kmsan_vmap_page_range_noflush(unsigned long start, + unsigned long end, + pgprot_t prot, + struct page **pages) {} +static inline void kmsan_vunmap_page_range(unsigned long start, + unsigned long end) {} + +static inline void kmsan_ioremap_page_range(unsigned long start, + unsigned long end, + phys_addr_t phys_addr, + pgprot_t prot) {} +static inline void kmsan_iounmap_page_range(unsigned long start, + unsigned long end) {} + +static inline void kmsan_context_enter(void) {} +static inline void kmsan_context_exit(void) {} + +static inline void kmsan_copy_to_user( + const void *to, const void *from, size_t to_copy, size_t left) {} + +static inline void kmsan_check_skb(const struct sk_buff *skb) {} +static inline void kmsan_handle_dma(const void *address, size_t size, + enum dma_data_direction direction) {} +static inline void kmsan_handle_urb(const struct urb *urb, bool is_out) {} + +#endif + +#endif /* LINUX_KMSAN_H */ diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 4aba6c0c2ba80..ba8d5808259bc 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -216,6 +216,15 @@ struct page { not kmapped, ie. highmem) */ #endif /* WANT_PAGE_VIRTUAL */ +#ifdef CONFIG_KMSAN + /* + * Bits in struct page are scarce, so the LSB in *shadow is used to + * indicate whether the page should be ignored by KMSAN or not. 
+ */ + struct page *shadow; + struct page *origin; +#endif + #ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS int _last_cpupid; #endif diff --git a/include/linux/sched.h b/include/linux/sched.h index 983389c3c26d1..208bff758b9cd 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -15,6 +15,7 @@ #include #include #include +#include #include #include #include @@ -1199,6 +1200,10 @@ struct task_struct { struct kcsan_ctx kcsan_ctx; #endif +#ifdef CONFIG_KMSAN + struct kmsan_task_state kmsan; +#endif + #ifdef CONFIG_FUNCTION_GRAPH_TRACER /* Index of current stored address in ret_stack: */ int curr_ret_stack; diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug index 9f6e6edbd9949..e6f251b83437e 100644 --- a/lib/Kconfig.debug +++ b/lib/Kconfig.debug @@ -823,6 +823,8 @@ config DEBUG_STACKOVERFLOW source "lib/Kconfig.kasan" +source "lib/Kconfig.kmsan" + endmenu # "Memory Debugging" config DEBUG_SHIRQ diff --git a/lib/Kconfig.kmsan b/lib/Kconfig.kmsan new file mode 100644 index 0000000000000..187dddfcf2201 --- /dev/null +++ b/lib/Kconfig.kmsan @@ -0,0 +1,22 @@ +config HAVE_ARCH_KMSAN + bool + +if HAVE_ARCH_KMSAN + +config KMSAN + bool "KMSAN: detector of uninitialized memory use" + depends on SLUB && !KASAN + select STACKDEPOT + help + KMSAN is a dynamic detector of uses of uninitialized memory in the + kernel. It is based on compiler instrumentation provided by Clang + and thus requires Clang 10.0.0+ to build. + +config TEST_KMSAN + tristate "Module for testing KMSAN for bug detection" + depends on m && KMSAN + help + Test module that can trigger various uses of uninitialized memory + detectable by KMSAN. + +endif diff --git a/mm/kmsan/Makefile b/mm/kmsan/Makefile new file mode 100644 index 0000000000000..a9778eb8a46a1 --- /dev/null +++ b/mm/kmsan/Makefile @@ -0,0 +1,11 @@ +obj-y := kmsan.o kmsan_instr.o kmsan_init.o kmsan_entry.o kmsan_hooks.o kmsan_report.o kmsan_shadow.o + +# KMSAN runtime functions may enable UACCESS checks, so build them without +# stackprotector to avoid objtool warnings. +CFLAGS_kmsan_instr.o := $(call cc-option, -fno-stack-protector) +CFLAGS_kmsan_shadow.o := $(call cc-option, -fno-stack-protector) +CFLAGS_kmsan_hooks.o := $(call cc-option, -fno-stack-protector) + +KMSAN_SANITIZE := n +KCOV_INSTRUMENT := n +UBSAN_SANITIZE := n diff --git a/mm/kmsan/kmsan.c b/mm/kmsan/kmsan.c new file mode 100644 index 0000000000000..037f8b5f33a57 --- /dev/null +++ b/mm/kmsan/kmsan.c @@ -0,0 +1,547 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN runtime library. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../slab.h" +#include "kmsan.h" + +#define KMSAN_STACK_DEPTH 64 +#define MAX_CHAIN_DEPTH 7 + +/* + * Some kernel asm() calls mention the non-existing |__force_order| variable + * in the asm constraints to preserve the order of accesses to control + * registers. KMSAN turns those mentions into actual memory accesses, therefore + * the variable is now required to link the kernel. 
+ */ +unsigned long __force_order; +EXPORT_SYMBOL(__force_order); + +bool kmsan_ready; +/* + * According to Documentation/x86/kernel-stacks, kernel code can run on the + * following stacks: + * - regular task stack - when executing the task code + * - interrupt stack - when handling external hardware interrupts and softirqs + * - NMI stack + * 0 is for regular interrupts, 1 for softirqs, 2 for NMI. + * Because interrupts may nest, trying to use a new context for every new + * interrupt. + */ +/* [0] for dummy per-CPU context. */ +DEFINE_PER_CPU(struct kmsan_context_state[KMSAN_NESTED_CONTEXT_MAX], + kmsan_percpu_cstate); +/* 0 for task context, |i>0| for kmsan_context_state[i]. */ +DEFINE_PER_CPU(int, kmsan_context_level); +DEFINE_PER_CPU(int, kmsan_in_runtime_cnt); + +struct kmsan_context_state *kmsan_task_context_state(void) +{ + int cpu = smp_processor_id(); + int level = this_cpu_read(kmsan_context_level); + struct kmsan_context_state *ret; + + if (!kmsan_ready || kmsan_in_runtime()) { + ret = &per_cpu(kmsan_percpu_cstate[0], cpu); + __memset(ret, 0, sizeof(struct kmsan_context_state)); + return ret; + } + + if (!level) + ret = ¤t->kmsan.cstate; + else + ret = &per_cpu(kmsan_percpu_cstate[level], cpu); + return ret; +} + +void kmsan_internal_task_create(struct task_struct *task) +{ + struct kmsan_task_state *state = &task->kmsan; + + __memset(state, 0, sizeof(struct kmsan_task_state)); + state->allow_reporting = true; +} + +void kmsan_internal_memset_shadow(void *addr, int b, size_t size, + bool checked) +{ + void *shadow_start; + u64 page_offset, address = (u64)addr; + size_t to_fill; + + BUG_ON(!metadata_is_contiguous(addr, size, META_SHADOW)); + while (size) { + page_offset = address % PAGE_SIZE; + to_fill = min(PAGE_SIZE - page_offset, (u64)size); + shadow_start = kmsan_get_metadata((void *)address, to_fill, + META_SHADOW); + if (!shadow_start) { + if (checked) + panic("%s: not memsetting %d bytes starting at %px, because the shadow is NULL\n", + __func__, to_fill, address); + /* Otherwise just move on. */ + } else { + __memset(shadow_start, b, to_fill); + } + address += to_fill; + size -= to_fill; + } +} + +void kmsan_internal_poison_shadow(void *address, size_t size, + gfp_t flags, unsigned int poison_flags) +{ + bool checked = poison_flags & KMSAN_POISON_CHECK; + depot_stack_handle_t handle; + u32 extra_bits = kmsan_extra_bits(/*depth*/0, + poison_flags & KMSAN_POISON_FREE); + + kmsan_internal_memset_shadow(address, -1, size, checked); + handle = kmsan_save_stack_with_flags(flags, extra_bits); + kmsan_set_origin_checked(address, size, handle, checked); +} + +void kmsan_internal_unpoison_shadow(void *address, size_t size, bool checked) +{ + kmsan_internal_memset_shadow(address, 0, size, checked); + kmsan_set_origin_checked(address, size, 0, checked); +} + +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int reserved) +{ + depot_stack_handle_t handle; + unsigned long entries[KMSAN_STACK_DEPTH]; + unsigned int nr_entries; + + nr_entries = stack_trace_save(entries, KMSAN_STACK_DEPTH, 0); + nr_entries = filter_irq_stacks(entries, nr_entries); + + /* Don't sleep (see might_sleep_if() in __alloc_pages_nodemask()). */ + flags &= ~__GFP_DIRECT_RECLAIM; + + handle = stack_depot_save(entries, nr_entries, flags); + return set_dsh_extra_bits(handle, reserved); +} + +/* + * Depending on the value of is_memmove, this serves as both a memcpy and a + * memmove implementation. 
+ * + * As with the regular memmove, do the following: + * - if src and dst don't overlap, use memcpy(); + * - if src and dst overlap: + * - if src > dst, use memcpy(); + * - if src < dst, use reverse-memcpy. + * Why this is correct: + * - problems may arise if for some part of the overlapping region we + * overwrite its shadow with a new value before copying it somewhere. + * But there's a 1:1 mapping between the kernel memory and its shadow, + * therefore if this doesn't happen with the kernel memory it can't happen + * with the shadow. + */ +static void kmsan_memcpy_memmove_metadata(void *dst, void *src, size_t n, + bool is_memmove) +{ + void *shadow_src, *shadow_dst; + depot_stack_handle_t *origin_src, *origin_dst; + int src_slots, dst_slots, i, iter, step, skip_bits; + depot_stack_handle_t old_origin = 0, chain_origin, new_origin = 0; + u32 *align_shadow_src, shadow; + bool backwards; + + shadow_dst = kmsan_get_metadata(dst, n, META_SHADOW); + if (!shadow_dst) + return; + BUG_ON(!metadata_is_contiguous(dst, n, META_SHADOW)); + + shadow_src = kmsan_get_metadata(src, n, META_SHADOW); + if (!shadow_src) { + /* + * |src| is untracked: zero out destination shadow, ignore the + * origins, we're done. + */ + __memset(shadow_dst, 0, n); + return; + } + BUG_ON(!metadata_is_contiguous(src, n, META_SHADOW)); + + if (is_memmove) + __memmove(shadow_dst, shadow_src, n); + else + __memcpy(shadow_dst, shadow_src, n); + + origin_dst = kmsan_get_metadata(dst, n, META_ORIGIN); + origin_src = kmsan_get_metadata(src, n, META_ORIGIN); + BUG_ON(!origin_dst || !origin_src); + BUG_ON(!metadata_is_contiguous(dst, n, META_ORIGIN)); + BUG_ON(!metadata_is_contiguous(src, n, META_ORIGIN)); + src_slots = (ALIGN((u64)src + n, ORIGIN_SIZE) - + ALIGN_DOWN((u64)src, ORIGIN_SIZE)) / ORIGIN_SIZE; + dst_slots = (ALIGN((u64)dst + n, ORIGIN_SIZE) - + ALIGN_DOWN((u64)dst, ORIGIN_SIZE)) / ORIGIN_SIZE; + BUG_ON(!src_slots || !dst_slots); + BUG_ON((src_slots < 1) || (dst_slots < 1)); + BUG_ON((src_slots - dst_slots > 1) || (dst_slots - src_slots < -1)); + + backwards = is_memmove && (dst > src); + i = backwards ? min(src_slots, dst_slots) - 1 : 0; + iter = backwards ? -1 : 1; + + align_shadow_src = (u32 *)ALIGN_DOWN((u64)shadow_src, ORIGIN_SIZE); + for (step = 0; step < min(src_slots, dst_slots); step++, i += iter) { + BUG_ON(i < 0); + shadow = align_shadow_src[i]; + if (i == 0) { + /* + * If |src| isn't aligned on ORIGIN_SIZE, don't + * look at the first |src % ORIGIN_SIZE| bytes + * of the first shadow slot. + */ + skip_bits = ((u64)src % ORIGIN_SIZE) * 8; + shadow = (shadow << skip_bits) >> skip_bits; + } + if (i == src_slots - 1) { + /* + * If |src + n| isn't aligned on + * ORIGIN_SIZE, don't look at the last + * |(src + n) % ORIGIN_SIZE| bytes of the + * last shadow slot. + */ + skip_bits = (((u64)src + n) % ORIGIN_SIZE) * 8; + shadow = (shadow >> skip_bits) << skip_bits; + } + /* + * Overwrite the origin only if the corresponding + * shadow is nonempty. + */ + if (origin_src[i] && (origin_src[i] != old_origin) && shadow) { + old_origin = origin_src[i]; + chain_origin = kmsan_internal_chain_origin(old_origin); + /* + * kmsan_internal_chain_origin() may return + * NULL, but we don't want to lose the previous + * origin value. 
+ */ + if (chain_origin) + new_origin = chain_origin; + else + new_origin = old_origin; + } + if (shadow) + origin_dst[i] = new_origin; + else + origin_dst[i] = 0; + } +} + +void kmsan_memcpy_metadata(void *dst, void *src, size_t n) +{ + kmsan_memcpy_memmove_metadata(dst, src, n, /*is_memmove*/false); +} + +void kmsan_memmove_metadata(void *dst, void *src, size_t n) +{ + kmsan_memcpy_memmove_metadata(dst, src, n, /*is_memmove*/true); +} + +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id) +{ + depot_stack_handle_t handle; + unsigned long entries[3]; + u64 magic = KMSAN_CHAIN_MAGIC_ORIGIN_FULL; + int depth = 0; + static int skipped; + u32 extra_bits; + bool uaf; + + if (!id) + return id; + /* + * Make sure we have enough spare bits in |id| to hold the UAF bit and + * the chain depth. + */ + BUILD_BUG_ON((1 << STACK_DEPOT_EXTRA_BITS) <= (MAX_CHAIN_DEPTH << 1)); + + extra_bits = get_dsh_extra_bits(id); + depth = kmsan_depth_from_eb(extra_bits); + uaf = kmsan_uaf_from_eb(extra_bits); + + if (depth >= MAX_CHAIN_DEPTH) { + skipped++; + if (skipped % 10000 == 0) { + pr_warn("not chained %d origins\n", skipped); + dump_stack(); + kmsan_print_origin(id); + } + return id; + } + depth++; + extra_bits = kmsan_extra_bits(depth, uaf); + + entries[0] = magic + depth; + entries[1] = kmsan_save_stack_with_flags(GFP_ATOMIC, extra_bits); + entries[2] = id; + handle = stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC); + return set_dsh_extra_bits(handle, extra_bits); +} + +void kmsan_write_aligned_origin(void *var, size_t size, u32 origin) +{ + u32 *var_cast = (u32 *)var; + int i; + + BUG_ON((u64)var_cast % ORIGIN_SIZE); + BUG_ON(size % ORIGIN_SIZE); + for (i = 0; i < size / ORIGIN_SIZE; i++) + var_cast[i] = origin; +} + +void kmsan_internal_set_origin(void *addr, int size, u32 origin) +{ + void *origin_start; + u64 address = (u64)addr, page_offset; + size_t to_fill, pad = 0; + + if (!IS_ALIGNED(address, ORIGIN_SIZE)) { + pad = address % ORIGIN_SIZE; + address -= pad; + size += pad; + } + + while (size > 0) { + page_offset = address % PAGE_SIZE; + to_fill = min(PAGE_SIZE - page_offset, (u64)size); + /* write at least ORIGIN_SIZE bytes */ + to_fill = ALIGN(to_fill, ORIGIN_SIZE); + BUG_ON(!to_fill); + origin_start = kmsan_get_metadata((void *)address, to_fill, + META_ORIGIN); + address += to_fill; + size -= to_fill; + if (!origin_start) + /* Can happen e.g. if the memory is untracked. 
*/ + continue; + kmsan_write_aligned_origin(origin_start, to_fill, origin); + } +} + +void kmsan_set_origin_checked(void *addr, int size, u32 origin, bool checked) +{ + if (checked && !metadata_is_contiguous(addr, size, META_ORIGIN)) + panic("%s: WARNING: not setting origin for %d bytes starting at %px, because the metadata is incontiguous\n", + __func__, size, addr); + kmsan_internal_set_origin(addr, size, origin); +} + +struct page *vmalloc_to_page_or_null(void *vaddr) +{ + struct page *page; + + if (!kmsan_internal_is_vmalloc_addr(vaddr) && + !kmsan_internal_is_module_addr(vaddr)) + return NULL; + page = vmalloc_to_page(vaddr); + if (pfn_valid(page_to_pfn(page))) + return page; + else + return NULL; +} + +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason) +{ + unsigned long irq_flags; + unsigned long addr64 = (unsigned long)addr; + unsigned char *shadow = NULL; + depot_stack_handle_t *origin = NULL; + depot_stack_handle_t cur_origin = 0, new_origin = 0; + int cur_off_start = -1; + int i, chunk_size; + size_t pos = 0; + + BUG_ON(!metadata_is_contiguous(addr, size, META_SHADOW)); + if (size <= 0) + return; + while (pos < size) { + chunk_size = min(size - pos, + PAGE_SIZE - ((addr64 + pos) % PAGE_SIZE)); + shadow = kmsan_get_metadata((void *)(addr64 + pos), chunk_size, + META_SHADOW); + if (!shadow) { + /* + * This page is untracked. If there were uninitialized + * bytes before, report them. + */ + if (cur_origin) { + irq_flags = kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos - 1, user_addr, + reason); + kmsan_leave_runtime(irq_flags); + } + cur_origin = 0; + cur_off_start = -1; + pos += chunk_size; + continue; + } + for (i = 0; i < chunk_size; i++) { + if (!shadow[i]) { + /* + * This byte is unpoisoned. If there were + * poisoned bytes before, report them. + */ + if (cur_origin) { + irq_flags = kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(irq_flags); + } + cur_origin = 0; + cur_off_start = -1; + continue; + } + origin = kmsan_get_metadata((void *)(addr64 + pos + i), + chunk_size - i, META_ORIGIN); + BUG_ON(!origin); + new_origin = *origin; + /* + * Encountered new origin - report the previous + * uninitialized range. + */ + if (cur_origin != new_origin) { + if (cur_origin) { + irq_flags = kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, + cur_off_start, pos + i - 1, + user_addr, reason); + kmsan_leave_runtime(irq_flags); + } + cur_origin = new_origin; + cur_off_start = pos + i; + } + } + pos += chunk_size; + } + BUG_ON(pos != size); + if (cur_origin) { + irq_flags = kmsan_enter_runtime(); + kmsan_report(cur_origin, addr, size, cur_off_start, pos - 1, + user_addr, reason); + kmsan_leave_runtime(irq_flags); + } +} + +bool metadata_is_contiguous(void *addr, size_t size, bool is_origin) +{ + u64 cur_addr = (u64)addr, next_addr; + char *cur_meta = NULL, *next_meta = NULL; + depot_stack_handle_t *origin_p; + bool all_untracked = false; + const char *fname = is_origin ? "origin" : "shadow"; + + if (!size) + return true; + + /* The whole range belongs to the same page. 
*/ + if (ALIGN_DOWN(cur_addr + size - 1, PAGE_SIZE) == + ALIGN_DOWN(cur_addr, PAGE_SIZE)) + return true; + cur_meta = kmsan_get_metadata((void *)cur_addr, 1, is_origin); + if (!cur_meta) + all_untracked = true; + for (next_addr = cur_addr + PAGE_SIZE; next_addr < (u64)addr + size; + cur_addr = next_addr, + cur_meta = next_meta, + next_addr += PAGE_SIZE) { + next_meta = kmsan_get_metadata((void *)next_addr, 1, is_origin); + if (!next_meta) { + if (!all_untracked) + goto report; + continue; + } + if ((u64)cur_meta == ((u64)next_meta - PAGE_SIZE)) + continue; + goto report; + } + return true; + +report: + pr_err("BUG: attempting to access two shadow page ranges.\n"); + dump_stack(); + pr_err("\n"); + pr_err("Access of size %d at %px.\n", size, addr); + pr_err("Addresses belonging to different ranges: %px and %px\n", + cur_addr, next_addr); + pr_err("page[0].%s: %px, page[1].%s: %px\n", + fname, cur_meta, fname, next_meta); + origin_p = kmsan_get_metadata(addr, 1, META_ORIGIN); + if (origin_p) { + pr_err("Origin: %08x\n", *origin_p); + kmsan_print_origin(*origin_p); + } else { + pr_err("Origin: unavailable\n"); + } + return false; +} + +/* + * Dummy replacement for __builtin_return_address() which may crash without + * frame pointers. + */ +void *kmsan_internal_return_address(int arg) +{ +#ifdef CONFIG_UNWINDER_FRAME_POINTER + switch (arg) { + case 1: + return __builtin_return_address(1); + case 2: + return __builtin_return_address(2); + default: + BUG(); + } +#else + unsigned long entries[1]; + + stack_trace_save(entries, 1, arg); + return (void *)entries[0]; +#endif +} + +bool kmsan_internal_is_module_addr(void *vaddr) +{ + return ((u64)vaddr >= MODULES_VADDR) && ((u64)vaddr < MODULES_END); +} + +bool kmsan_internal_is_vmalloc_addr(void *addr) +{ + return ((u64)addr >= VMALLOC_START) && ((u64)addr < VMALLOC_END); +} diff --git a/mm/kmsan/kmsan.h b/mm/kmsan/kmsan.h new file mode 100644 index 0000000000000..9568b0005b5e7 --- /dev/null +++ b/mm/kmsan/kmsan.h @@ -0,0 +1,161 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN internal declarations. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#ifndef __MM_KMSAN_KMSAN_H +#define __MM_KMSAN_KMSAN_H + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kmsan_shadow.h" + +#define KMSAN_MAGIC_MASK 0xffffffffff00 +#define KMSAN_ALLOCA_MAGIC_ORIGIN 0x4110c4071900 +#define KMSAN_CHAIN_MAGIC_ORIGIN_FULL 0xd419170cba00 + +#define KMSAN_POISON_NOCHECK 0x0 +#define KMSAN_POISON_CHECK 0x1 +#define KMSAN_POISON_FREE 0x2 + +#define ORIGIN_SIZE 4 + +#define META_SHADOW (false) +#define META_ORIGIN (true) + +#define KMSAN_NESTED_CONTEXT_MAX (8) +/* [0] for dummy per-CPU context */ +DECLARE_PER_CPU(struct kmsan_context_state[KMSAN_NESTED_CONTEXT_MAX], + kmsan_percpu_cstate); +/* 0 for task context, |i>0| for kmsan_context_state[i]. 
*/ +DECLARE_PER_CPU(int, kmsan_context_level); + +extern spinlock_t report_lock; +extern bool kmsan_ready; + +void kmsan_print_origin(depot_stack_handle_t origin); +void kmsan_report(depot_stack_handle_t origin, + void *address, int size, int off_first, int off_last, + const void *user_addr, int reason); + +enum KMSAN_BUG_REASON { + REASON_ANY, + REASON_COPY_TO_USER, + REASON_USE_AFTER_FREE, + REASON_SUBMIT_URB, +}; + +/* + * When a compiler hook is invoked, it may make a call to instrumented code + * and eventually call itself recursively. To avoid that, we protect the + * runtime entry points with kmsan_enter_runtime()/kmsan_leave_runtime() and + * exit the hook if kmsan_in_runtime() is true. But when an interrupt occurs + * inside the runtime, the hooks won’t run either, which may lead to errors. + * Therefore we have to disable interrupts inside the runtime. + */ +DECLARE_PER_CPU(int, kmsan_in_runtime_cnt); + +static __always_inline bool kmsan_in_runtime(void) +{ + return this_cpu_read(kmsan_in_runtime_cnt); +} + +static __always_inline unsigned long kmsan_enter_runtime(void) +{ + int level; + unsigned long irq_flags; + + preempt_disable(); + local_irq_save(irq_flags); + stop_nmi(); + level = this_cpu_inc_return(kmsan_in_runtime_cnt); + BUG_ON(level > 1); + return irq_flags; +} + +static __always_inline void kmsan_leave_runtime(unsigned long irq_flags) +{ + int level = this_cpu_dec_return(kmsan_in_runtime_cnt); + + if (level) + panic("kmsan_in_runtime: %d\n", level); + restart_nmi(); + local_irq_restore(irq_flags); + preempt_enable(); +} + +void kmsan_memcpy_metadata(void *dst, void *src, size_t n); +void kmsan_memmove_metadata(void *dst, void *src, size_t n); + +depot_stack_handle_t kmsan_save_stack(void); +depot_stack_handle_t kmsan_save_stack_with_flags(gfp_t flags, + unsigned int extra_bits); + +/* + * Pack and unpack the origin chain depth and UAF flag to/from the extra bits + * provided by the stack depot. + * The UAF flag is stored in the lowest bit, followed by the depth in the upper + * bits. + * set_dsh_extra_bits() is responsible for clamping the value. 
+ */ +static __always_inline unsigned int kmsan_extra_bits(unsigned int depth, + bool uaf) +{ + return (depth << 1) | uaf; +} + +static __always_inline bool kmsan_uaf_from_eb(unsigned int extra_bits) +{ + return extra_bits & 1; +} + +static __always_inline unsigned int kmsan_depth_from_eb(unsigned int extra_bits) +{ + return extra_bits >> 1; +} + +void kmsan_internal_poison_shadow(void *address, size_t size, gfp_t flags, + unsigned int poison_flags); +void kmsan_internal_unpoison_shadow(void *address, size_t size, bool checked); +void kmsan_internal_memset_shadow(void *address, int b, size_t size, + bool checked); +depot_stack_handle_t kmsan_internal_chain_origin(depot_stack_handle_t id); +void kmsan_write_aligned_origin(void *var, size_t size, u32 origin); + +void kmsan_internal_task_create(struct task_struct *task); +void kmsan_internal_set_origin(void *addr, int size, u32 origin); +void kmsan_set_origin_checked(void *addr, int size, u32 origin, bool checked); + +struct kmsan_context_state *kmsan_task_context_state(void); + +bool metadata_is_contiguous(void *addr, size_t size, bool is_origin); +void kmsan_internal_check_memory(void *addr, size_t size, const void *user_addr, + int reason); + +struct page *vmalloc_to_page_or_null(void *vaddr); + +/* Declared in mm/vmalloc.c */ +void __vunmap_page_range(unsigned long addr, unsigned long end); +int __vmap_page_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages); + +void *kmsan_internal_return_address(int arg); +bool kmsan_internal_is_module_addr(void *vaddr); +bool kmsan_internal_is_vmalloc_addr(void *addr); + +#endif /* __MM_KMSAN_KMSAN_H */ diff --git a/mm/kmsan/kmsan_init.c b/mm/kmsan/kmsan_init.c new file mode 100644 index 0000000000000..12c84efa70ff9 --- /dev/null +++ b/mm/kmsan/kmsan_init.c @@ -0,0 +1,79 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN initialization routines. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include "kmsan.h" + +#include +#include +#include +#include + +#define NUM_FUTURE_RANGES 128 +struct start_end_pair { + void *start, *end; +}; + +static struct start_end_pair start_end_pairs[NUM_FUTURE_RANGES] __initdata; +static int future_index __initdata; + +/* + * Record a range of memory for which the metadata pages will be created once + * the page allocator becomes available. + */ +static void __init kmsan_record_future_shadow_range(void *start, void *end) +{ + BUG_ON(future_index == NUM_FUTURE_RANGES); + BUG_ON((start >= end) || !start || !end); + start_end_pairs[future_index].start = start; + start_end_pairs[future_index].end = end; + future_index++; +} + +/* + * Initialize the shadow for existing mappings during kernel initialization. + * These include kernel text/data sections, NODE_DATA and future ranges + * registered while creating other data (e.g. percpu). + * + * Allocations via memblock can be only done before slab is initialized. 
+ */ +void __init kmsan_initialize_shadow(void) +{ + int nid; + u64 i; + const size_t nd_size = roundup(sizeof(pg_data_t), PAGE_SIZE); + phys_addr_t p_start, p_end; + + for_each_reserved_mem_region(i, &p_start, &p_end) + kmsan_record_future_shadow_range(phys_to_virt(p_start), + phys_to_virt(p_end+1)); + /* Allocate shadow for .data */ + kmsan_record_future_shadow_range(_sdata, _edata); + + for_each_online_node(nid) + kmsan_record_future_shadow_range( + NODE_DATA(nid), (char *)NODE_DATA(nid) + nd_size); + + for (i = 0; i < future_index; i++) + kmsan_init_alloc_meta_for_range(start_end_pairs[i].start, + start_end_pairs[i].end); +} +EXPORT_SYMBOL(kmsan_initialize_shadow); + +void __init kmsan_initialize(void) +{ + /* Assuming current is init_task */ + kmsan_internal_task_create(current); + pr_info("Starting KernelMemorySanitizer\n"); + kmsan_ready = true; +} +EXPORT_SYMBOL(kmsan_initialize); diff --git a/mm/kmsan/kmsan_report.c b/mm/kmsan/kmsan_report.c new file mode 100644 index 0000000000000..7455fa7d10bb2 --- /dev/null +++ b/mm/kmsan/kmsan_report.c @@ -0,0 +1,143 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN error reporting routines. + * + * Copyright (C) 2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include +#include +#include + +#include "kmsan.h" + +DEFINE_SPINLOCK(report_lock); + +void kmsan_print_origin(depot_stack_handle_t origin) +{ + unsigned long *entries = NULL, *chained_entries = NULL; + unsigned long nr_entries, chained_nr_entries, magic; + char *descr = NULL; + void *pc1 = NULL, *pc2 = NULL; + depot_stack_handle_t head; + + if (!origin) + return; + + while (true) { + nr_entries = stack_depot_fetch(origin, &entries); + magic = nr_entries ? (entries[0] & KMSAN_MAGIC_MASK) : 0; + if ((nr_entries == 4) && (magic == KMSAN_ALLOCA_MAGIC_ORIGIN)) { + descr = (char *)entries[1]; + pc1 = (void *)entries[2]; + pc2 = (void *)entries[3]; + pr_err("Local variable %s created at:\n", descr); + pr_err(" %pS\n", pc1); + pr_err(" %pS\n", pc2); + break; + } + if ((nr_entries == 3) && + (magic == KMSAN_CHAIN_MAGIC_ORIGIN_FULL)) { + head = entries[1]; + origin = entries[2]; + pr_err("Uninit was stored to memory at:\n"); + chained_nr_entries = + stack_depot_fetch(head, &chained_entries); + stack_trace_print(chained_entries, chained_nr_entries, + 0); + pr_err("\n"); + continue; + } + pr_err("Uninit was created at:\n"); + if (nr_entries) + stack_trace_print(entries, nr_entries, 0); + else + pr_err("(stack is not available)\n"); + break; + } +} + +/** + * kmsan_report() - Report a use of uninitialized value. + * @origin: Stack ID of the uninitialized value. + * @address: Address at which the memory access happens. + * @size: Memory access size. + * @off_first: Offset (from @address) of the first byte to be reported. + * @off_last: Offset (from @address) of the last byte to be reported. + * @user_addr: When non-NULL, denotes the userspace address to which the kernel + * is leaking data. + * @reason: Error type from KMSAN_BUG_REASON enum. + * + * kmsan_report() prints an error message for a consequent group of bytes + * sharing the same origin. If an uninitialized value is used in a comparison, + * this function is called once without specifying the addresses. 
When checking + * a memory range, KMSAN may call kmsan_report() multiple times with the same + * @address, @size, @user_addr and @reason, but different @off_first and + * @off_last corresponding to different @origin values. + */ +void kmsan_report(depot_stack_handle_t origin, + void *address, int size, int off_first, int off_last, + const void *user_addr, int reason) +{ + unsigned long flags; + bool is_uaf; + char *bug_type = NULL; + + if (!kmsan_ready) + return; + if (!current->kmsan.allow_reporting) + return; + if (!origin) + return; + + current->kmsan.allow_reporting = false; + spin_lock_irqsave(&report_lock, flags); + pr_err("=====================================================\n"); + is_uaf = kmsan_uaf_from_eb(get_dsh_extra_bits(origin)); + switch (reason) { + case REASON_ANY: + bug_type = is_uaf ? "use-after-free" : "uninit-value"; + break; + case REASON_COPY_TO_USER: + bug_type = is_uaf ? "kernel-infoleak-after-free" : + "kernel-infoleak"; + break; + case REASON_SUBMIT_URB: + bug_type = is_uaf ? "kernel-usb-infoleak-after-free" : + "kernel-usb-infoleak"; + break; + } + pr_err("BUG: KMSAN: %s in %pS\n", + bug_type, kmsan_internal_return_address(2)); + dump_stack(); + pr_err("\n"); + + kmsan_print_origin(origin); + + if (size) { + pr_err("\n"); + if (off_first == off_last) + pr_err("Byte %d of %d is uninitialized\n", + off_first, size); + else + pr_err("Bytes %d-%d of %d are uninitialized\n", + off_first, off_last, size); + } + if (address) + pr_err("Memory access of size %d starts at %px\n", + size, address); + if (user_addr && reason == REASON_COPY_TO_USER) + pr_err("Data copied to user address %px\n", user_addr); + pr_err("=====================================================\n"); + add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE); + spin_unlock_irqrestore(&report_lock, flags); + if (panic_on_warn) + panic("panic_on_warn set ...\n"); + current->kmsan.allow_reporting = true; +} diff --git a/mm/kmsan/kmsan_shadow.c b/mm/kmsan/kmsan_shadow.c new file mode 100644 index 0000000000000..bcd4f1faa7a67 --- /dev/null +++ b/mm/kmsan/kmsan_shadow.c @@ -0,0 +1,456 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN shadow implementation. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "kmsan.h" +#include "kmsan_shadow.h" + +#define shadow_page_for(page) ((page)->shadow) + +#define origin_page_for(page) ((page)->origin) + +#define shadow_ptr_for(page) (page_address((page)->shadow)) + +#define origin_ptr_for(page) (page_address((page)->origin)) + +#define has_shadow_page(page) (!!((page)->shadow)) + +#define has_origin_page(page) (!!((page)->origin)) + +#define set_no_shadow_origin_page(page) \ + do { \ + (page)->shadow = NULL; \ + (page)->origin = NULL; \ + } while (0) /**/ + +#define is_ignored_page(page) (!!(((u64)((page)->shadow)) % 2)) + +#define ignore_page(pg) \ + ((pg)->shadow = (struct page *)((u64)((pg)->shadow) | 1)) + +DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_shadow); +DEFINE_PER_CPU(char[CPU_ENTRY_AREA_SIZE], cpu_entry_area_origin); + +/* + * Dummy load and store pages to be used when the real metadata is unavailable. 
+ * There are separate pages for loads and stores, so that every load returns a + * zero, and every store doesn't affect other loads. + */ +char dummy_load_page[PAGE_SIZE] __aligned(PAGE_SIZE); +char dummy_store_page[PAGE_SIZE] __aligned(PAGE_SIZE); + +/* + * Taken from arch/x86/mm/physaddr.h to avoid using an instrumented version. + */ +static int kmsan_phys_addr_valid(unsigned long addr) +{ +#ifdef CONFIG_PHYS_ADDR_T_64BIT + return !(addr >> boot_cpu_data.x86_phys_bits); +#else + return 1; +#endif +} + +/* + * Taken from arch/x86/mm/physaddr.c to avoid using an instrumented version. + */ +static bool kmsan_virt_addr_valid(void *addr) +{ + unsigned long x = (unsigned long)addr; + unsigned long y = x - __START_KERNEL_map; + + /* use the carry flag to determine if x was < __START_KERNEL_map */ + if (unlikely(x > y)) { + x = y + phys_base; + + if (y >= KERNEL_IMAGE_SIZE) + return false; + } else { + x = y + (__START_KERNEL_map - PAGE_OFFSET); + + /* carry flag will be set if starting x was >= PAGE_OFFSET */ + if ((x > y) || !kmsan_phys_addr_valid(x)) + return false; + } + + return pfn_valid(x >> PAGE_SHIFT); +} + +static unsigned long vmalloc_meta(void *addr, bool is_origin) +{ + unsigned long addr64 = (unsigned long)addr, off; + + BUG_ON(is_origin && !IS_ALIGNED(addr64, ORIGIN_SIZE)); + if (kmsan_internal_is_vmalloc_addr(addr)) + return addr64 + (is_origin ? VMALLOC_ORIGIN_OFFSET + : VMALLOC_SHADOW_OFFSET); + if (kmsan_internal_is_module_addr(addr)) { + off = addr64 - MODULES_VADDR; + return off + (is_origin ? MODULES_ORIGIN_START + : MODULES_SHADOW_START); + } + return 0; +} + +static void *get_cea_meta_or_null(void *addr, bool is_origin) +{ + int cpu = smp_processor_id(); + int off; + char *metadata_array; + + if (((u64)addr < CPU_ENTRY_AREA_BASE) || + ((u64)addr >= (CPU_ENTRY_AREA_BASE + CPU_ENTRY_AREA_MAP_SIZE))) + return NULL; + off = (char *)addr - (char *)get_cpu_entry_area(cpu); + if ((off < 0) || (off >= CPU_ENTRY_AREA_SIZE)) + return NULL; + metadata_array = is_origin ? cpu_entry_area_origin : + cpu_entry_area_shadow; + return &per_cpu(metadata_array[off], cpu); +} + +static struct page *virt_to_page_or_null(void *vaddr) +{ + if (kmsan_virt_addr_valid(vaddr)) + return virt_to_page(vaddr); + else + return NULL; +} + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *address, u64 size, + bool store) +{ + struct shadow_origin_ptr ret; + void *shadow; + + if (size > PAGE_SIZE) + panic("size too big in %s(%px, %d, %d)\n", + __func__, address, size, store); + + if (!kmsan_ready || kmsan_in_runtime()) + goto return_dummy; + + BUG_ON(!metadata_is_contiguous(address, size, META_SHADOW)); + shadow = kmsan_get_metadata(address, size, META_SHADOW); + if (!shadow) + goto return_dummy; + + ret.s = shadow; + ret.o = kmsan_get_metadata(address, size, META_ORIGIN); + return ret; + +return_dummy: + if (store) { + ret.s = dummy_store_page; + ret.o = dummy_store_page; + } else { + ret.s = dummy_load_page; + ret.o = dummy_load_page; + } + return ret; +} + +/* + * Obtain the shadow or origin pointer for the given address, or NULL if there's + * none. The caller must check the return value for being non-NULL if needed. + * The return value of this function should not depend on whether we're in the + * runtime or not. 
+ */ +void *kmsan_get_metadata(void *address, size_t size, bool is_origin) +{ + struct page *page; + void *ret; + u64 addr = (u64)address, pad, off; + + if (is_origin && !IS_ALIGNED(addr, ORIGIN_SIZE)) { + pad = addr % ORIGIN_SIZE; + addr -= pad; + size += pad; + } + address = (void *)addr; + if (kmsan_internal_is_vmalloc_addr(address) || + kmsan_internal_is_module_addr(address)) + return (void *)vmalloc_meta(address, is_origin); + + ret = get_cea_meta_or_null(address, is_origin); + if (ret) + return ret; + + page = virt_to_page_or_null(address); + if (!page) + return NULL; + if (is_ignored_page(page)) + return NULL; + if (!has_shadow_page(page) || !has_origin_page(page)) + return NULL; + off = addr % PAGE_SIZE; + + ret = (is_origin ? origin_ptr_for(page) : shadow_ptr_for(page)) + off; + return ret; +} + +void __init kmsan_init_alloc_meta_for_range(void *start, void *end) +{ + u64 addr, size; + struct page *page; + void *shadow, *origin; + struct page *shadow_p, *origin_p; + + start = (void *)ALIGN_DOWN((u64)start, PAGE_SIZE); + size = ALIGN((u64)end - (u64)start, PAGE_SIZE); + shadow = memblock_alloc(size, PAGE_SIZE); + origin = memblock_alloc(size, PAGE_SIZE); + for (addr = 0; addr < size; addr += PAGE_SIZE) { + page = virt_to_page_or_null((char *)start + addr); + shadow_p = virt_to_page_or_null((char *)shadow + addr); + set_no_shadow_origin_page(shadow_p); + shadow_page_for(page) = shadow_p; + origin_p = virt_to_page_or_null((char *)origin + addr); + set_no_shadow_origin_page(origin_p); + origin_page_for(page) = origin_p; + } +} + +/* Called from mm/memory.c */ +void kmsan_copy_page_meta(struct page *dst, struct page *src) +{ + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + if (!has_shadow_page(src)) { + kmsan_internal_unpoison_shadow(page_address(dst), PAGE_SIZE, + /*checked*/false); + return; + } + if (!has_shadow_page(dst)) + return; + if (is_ignored_page(src)) { + ignore_page(dst); + return; + } + + irq_flags = kmsan_enter_runtime(); + __memcpy(shadow_ptr_for(dst), shadow_ptr_for(src), + PAGE_SIZE); + BUG_ON(!has_origin_page(src) || !has_origin_page(dst)); + __memcpy(origin_ptr_for(dst), origin_ptr_for(src), + PAGE_SIZE); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_copy_page_meta); + +/* Helper function to allocate page metadata. */ +static int kmsan_internal_alloc_meta_for_pages(struct page *page, + unsigned int order, + gfp_t flags, int node) +{ + struct page *shadow, *origin; + int pages = 1 << order; + int i; + bool initialized = (flags & __GFP_ZERO) || !kmsan_ready; + depot_stack_handle_t handle; + + if (flags & __GFP_NO_KMSAN_SHADOW) { + for (i = 0; i < pages; i++) + set_no_shadow_origin_page(&page[i]); + return 0; + } + + /* + * We always want metadata allocations to succeed and to finish fast. + */ + flags = GFP_ATOMIC; + if (initialized) + flags |= __GFP_ZERO; + shadow = alloc_pages_node(node, flags | __GFP_NO_KMSAN_SHADOW, order); + if (!shadow) { + for (i = 0; i < pages; i++) { + set_no_shadow_origin_page(&page[i]); + set_no_shadow_origin_page(&page[i]); + } + return -ENOMEM; + } + if (!initialized) + __memset(page_address(shadow), -1, PAGE_SIZE * pages); + + origin = alloc_pages_node(node, flags | __GFP_NO_KMSAN_SHADOW, order); + /* Assume we've allocated the origin. 
*/ + if (!origin) { + __free_pages(shadow, order); + for (i = 0; i < pages; i++) + set_no_shadow_origin_page(&page[i]); + return -ENOMEM; + } + + if (!initialized) { + handle = kmsan_save_stack_with_flags(flags, /*extra_bits*/0); + /* + * Addresses are page-aligned, pages are contiguous, so it's ok + * to just fill the origin pages with |handle|. + */ + for (i = 0; i < PAGE_SIZE * pages / sizeof(handle); i++) { + ((depot_stack_handle_t *)page_address(origin))[i] = + handle; + } + } + + for (i = 0; i < pages; i++) { + shadow_page_for(&page[i]) = &shadow[i]; + set_no_shadow_origin_page(shadow_page_for(&page[i])); + origin_page_for(&page[i]) = &origin[i]; + set_no_shadow_origin_page(origin_page_for(&page[i])); + } + return 0; +} + +/* Called from mm/page_alloc.c */ +int kmsan_alloc_page(struct page *page, unsigned int order, gfp_t flags) +{ + unsigned long irq_flags; + int ret; + + if (kmsan_in_runtime()) + return 0; + irq_flags = kmsan_enter_runtime(); + ret = kmsan_internal_alloc_meta_for_pages(page, order, flags, -1); + kmsan_leave_runtime(irq_flags); + return ret; +} + +/* Called from mm/page_alloc.c */ +void kmsan_free_page(struct page *page, unsigned int order) +{ + struct page *shadow, *origin, *cur_page; + int pages = 1 << order; + int i; + unsigned long irq_flags; + + if (!shadow_page_for(page)) { + for (i = 0; i < pages; i++) { + cur_page = &page[i]; + BUG_ON(shadow_page_for(cur_page)); + } + return; + } + + if (!kmsan_ready) { + for (i = 0; i < pages; i++) { + cur_page = &page[i]; + set_no_shadow_origin_page(cur_page); + } + return; + } + + irq_flags = kmsan_enter_runtime(); + shadow = shadow_page_for(&page[0]); + origin = origin_page_for(&page[0]); + + for (i = 0; i < pages; i++) { + BUG_ON(has_shadow_page(shadow_page_for(&page[i]))); + BUG_ON(has_shadow_page(origin_page_for(&page[i]))); + set_no_shadow_origin_page(&page[i]); + } + BUG_ON(has_shadow_page(shadow)); + __free_pages(shadow, order); + + BUG_ON(has_shadow_page(origin)); + __free_pages(origin, order); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_free_page); + +/* Called from mm/page_alloc.c */ +void kmsan_split_page(struct page *page, unsigned int order) +{ + struct page *shadow, *origin; + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + + irq_flags = kmsan_enter_runtime(); + if (!has_shadow_page(&page[0])) { + BUG_ON(has_origin_page(&page[0])); + kmsan_leave_runtime(irq_flags); + return; + } + shadow = shadow_page_for(&page[0]); + split_page(shadow, order); + + origin = origin_page_for(&page[0]); + split_page(origin, order); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_split_page); + +/* Called from mm/vmalloc.c */ +void kmsan_vmap_page_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages) +{ + int nr, i, mapped; + struct page **s_pages, **o_pages; + unsigned long shadow_start, shadow_end, origin_start, origin_end; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + shadow_start = vmalloc_meta((void *)start, META_SHADOW); + if (!shadow_start) + return; + + BUG_ON(start >= end); + nr = (end - start) / PAGE_SIZE; + s_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL); + o_pages = kcalloc(nr, sizeof(struct page *), GFP_KERNEL); + if (!s_pages || !o_pages) + goto ret; + for (i = 0; i < nr; i++) { + s_pages[i] = shadow_page_for(pages[i]); + o_pages[i] = origin_page_for(pages[i]); + } + prot = __pgprot(pgprot_val(prot) | _PAGE_NX); + prot = PAGE_KERNEL; + + shadow_end = vmalloc_meta((void *)end, META_SHADOW); + 
origin_start = vmalloc_meta((void *)start, META_ORIGIN); + origin_end = vmalloc_meta((void *)end, META_ORIGIN); + mapped = __vmap_page_range_noflush(shadow_start, shadow_end, + prot, s_pages); + BUG_ON(mapped != nr); + flush_tlb_kernel_range(shadow_start, shadow_end); + mapped = __vmap_page_range_noflush(origin_start, origin_end, + prot, o_pages); + BUG_ON(mapped != nr); + flush_tlb_kernel_range(origin_start, origin_end); +ret: + kfree(s_pages); + kfree(o_pages); +} + +void kmsan_ignore_page(struct page *page, int order) +{ + int i; + + for (i = 0; i < 1 << order; i++) + ignore_page(&page[i]); +} diff --git a/mm/kmsan/kmsan_shadow.h b/mm/kmsan/kmsan_shadow.h new file mode 100644 index 0000000000000..eaa7f771b6a52 --- /dev/null +++ b/mm/kmsan/kmsan_shadow.h @@ -0,0 +1,30 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * KMSAN shadow API. + * + * This should be agnostic to shadow implementation details. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#ifndef __MM_KMSAN_KMSAN_SHADOW_H +#define __MM_KMSAN_KMSAN_SHADOW_H + +#include /* for CPU_ENTRY_AREA_MAP_SIZE */ + +struct shadow_origin_ptr { + void *s, *o; +}; + +struct shadow_origin_ptr kmsan_get_shadow_origin_ptr(void *addr, u64 size, + bool store); +void *kmsan_get_metadata(void *addr, size_t size, bool is_origin); +void __init kmsan_init_alloc_meta_for_range(void *start, void *end); + +#endif /* __MM_KMSAN_KMSAN_SHADOW_H */ diff --git a/scripts/Makefile.kmsan b/scripts/Makefile.kmsan new file mode 100644 index 0000000000000..8b3844b66b228 --- /dev/null +++ b/scripts/Makefile.kmsan @@ -0,0 +1,12 @@ +ifdef CONFIG_KMSAN + +CFLAGS_KMSAN := -fsanitize=kernel-memory + +ifeq ($(call cc-option, $(CFLAGS_KMSAN) -Werror),) + ifneq ($(CONFIG_COMPILE_TEST),y) + $(warning Cannot use CONFIG_KMSAN: \ + -fsanitize=kernel-memory is not supported by compiler) + endif +endif + +endif From patchwork Wed Mar 25 16:12:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458241 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 10000913 for ; Wed, 25 Mar 2020 16:13:25 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B5D4120777 for ; Wed, 25 Mar 2020 16:13:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Wrd4m9KK" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B5D4120777 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6E1246B0037; Wed, 25 Mar 2020 12:13:20 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 6BA7F6B006C; Wed, 25 Mar 2020 12:13:20 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5CE7C6B006E; Wed, 25 Mar 2020 12:13:20 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org 
Date: Wed, 25 Mar 2020 17:12:18 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-8-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 07/38] kmsan: KMSAN
compiler API implementation From: glider@google.com To: Jens Axboe , Andy Lutomirski , Wolfram Sang , Christoph Hellwig , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: kmsan_instr.c contains the functions called by KMSAN instrumentation. These include functions that: - return shadow/origin pointers for memory accesses; - poison and unpoison local variables; - provide KMSAN context state to pass metadata for function arguments; - perform string operations (mem*) on metadata; - tell KMSAN to report an error. This patch has been split away from the rest of KMSAN runtime to simplify the review process. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Jens Axboe Cc: Andy Lutomirski Cc: Wolfram Sang Cc: Christoph Hellwig Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- v4: - split this patch away as requested by Andrey Konovalov - removed redundant address checks when copying shadow - fix __msan_memmove prototype Change-Id: I826272ed2ebe8ab8ef61a9d4cccdcf07c7b6b499 --- mm/kmsan/kmsan_instr.c | 229 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 229 insertions(+) create mode 100644 mm/kmsan/kmsan_instr.c diff --git a/mm/kmsan/kmsan_instr.c b/mm/kmsan/kmsan_instr.c new file mode 100644 index 0000000000000..0de8aafac5101 --- /dev/null +++ b/mm/kmsan/kmsan_instr.c @@ -0,0 +1,229 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN compiler API. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ + +#include "kmsan.h" +#include +#include + +static bool is_bad_asm_addr(void *addr, u64 size, bool is_store) +{ + if ((u64)addr < TASK_SIZE) + return true; + if (!kmsan_get_metadata(addr, size, META_SHADOW)) + return true; + return false; +} + +struct shadow_origin_ptr __msan_metadata_ptr_for_load_n(void *addr, u64 size) +{ + return kmsan_get_shadow_origin_ptr(addr, size, /*store*/false); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_load_n); + +struct shadow_origin_ptr __msan_metadata_ptr_for_store_n(void *addr, u64 size) +{ + return kmsan_get_shadow_origin_ptr(addr, size, /*store*/true); +} +EXPORT_SYMBOL(__msan_metadata_ptr_for_store_n); + +#define DECLARE_METADATA_PTR_GETTER(size) \ +struct shadow_origin_ptr __msan_metadata_ptr_for_load_##size(void *addr) \ +{ \ + return kmsan_get_shadow_origin_ptr(addr, size, /*store*/false); \ +} \ +EXPORT_SYMBOL(__msan_metadata_ptr_for_load_##size); \ + \ +struct shadow_origin_ptr __msan_metadata_ptr_for_store_##size(void *addr) \ +{ \ + return kmsan_get_shadow_origin_ptr(addr, size, /*store*/true); \ +} \ +EXPORT_SYMBOL(__msan_metadata_ptr_for_store_##size) + +DECLARE_METADATA_PTR_GETTER(1); +DECLARE_METADATA_PTR_GETTER(2); +DECLARE_METADATA_PTR_GETTER(4); +DECLARE_METADATA_PTR_GETTER(8); + +void __msan_instrument_asm_store(void *addr, u64 size) +{ + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + /* + * Most of the accesses are below 32 bytes. The two exceptions so far + * are clwb() (64 bytes) and FPU state (512 bytes). + * It's unlikely that the assembly will touch more than 512 bytes. + */ + if (size > 512) { + WARN_ONCE(1, "assembly store size too big: %d\n", size); + size = 8; + } + if (is_bad_asm_addr(addr, size, /*is_store*/true)) + return; + irq_flags = kmsan_enter_runtime(); + /* Unpoisoning the memory on best effort. */ + kmsan_internal_unpoison_shadow(addr, size, /*checked*/false); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(__msan_instrument_asm_store); + +void *__msan_memmove(void *dst, const void *src, size_t n) +{ + void *result; + + result = __memmove(dst, src, n); + if (!n) + /* Some people call memmove() with zero length. */ + return result; + if (!kmsan_ready || kmsan_in_runtime()) + return result; + + kmsan_memmove_metadata(dst, (void *)src, n); + + return result; +} +EXPORT_SYMBOL(__msan_memmove); + +void *__msan_memmove_nosanitize(void *dst, void *src, u64 n) +{ + return __memmove(dst, src, n); +} +EXPORT_SYMBOL(__msan_memmove_nosanitize); + +void *__msan_memcpy(void *dst, const void *src, u64 n) +{ + void *result; + + result = __memcpy(dst, src, n); + if (!n) + /* Some people call memcpy() with zero length. */ + return result; + + if (!kmsan_ready || kmsan_in_runtime()) + return result; + + kmsan_memcpy_metadata(dst, (void *)src, n); + + return result; +} +EXPORT_SYMBOL(__msan_memcpy); + +void *__msan_memcpy_nosanitize(void *dst, void *src, u64 n) +{ + return __memcpy(dst, src, n); +} +EXPORT_SYMBOL(__msan_memcpy_nosanitize); + +void *__msan_memset(void *dst, int c, size_t n) +{ + void *result; + unsigned long irq_flags; + + result = __memset(dst, c, n); + if (!kmsan_ready || kmsan_in_runtime()) + return result; + + irq_flags = kmsan_enter_runtime(); + /* + * Clang doesn't pass parameter metadata here, so it is impossible to + * use shadow of @c to set up the shadow for @dst. 
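/*
 * Illustrative sketch, not part of this patch: for an ordinary instrumented
 * store such as "*p = v;" with a 4-byte p, the compiler pass is expected to
 * emit roughly the following around the original store (shadow_of_v and
 * origin_of_v stand for the value metadata the compiler tracks itself):
 *
 *	struct shadow_origin_ptr sop = __msan_metadata_ptr_for_store_4(p);
 *	*(u32 *)sop.s = shadow_of_v;
 *	if (shadow_of_v)
 *		*(u32 *)sop.o = __msan_chain_origin(origin_of_v);
 *	*p = v;
 *
 * The DECLARE_METADATA_PTR_GETTER() helpers above cover these common
 * fixed-size accesses; __msan_metadata_ptr_for_load_n() and
 * __msan_metadata_ptr_for_store_n() handle the remaining sizes.
 */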
+ */ + kmsan_internal_unpoison_shadow(dst, n, /*checked*/false); + kmsan_leave_runtime(irq_flags); + + return result; +} +EXPORT_SYMBOL(__msan_memset); + +void *__msan_memset_nosanitize(void *dst, int c, size_t n) +{ + return __memset(dst, c, n); +} +EXPORT_SYMBOL(__msan_memset_nosanitize); + +depot_stack_handle_t __msan_chain_origin(depot_stack_handle_t origin) +{ + depot_stack_handle_t ret = 0; + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return ret; + + /* Creating new origins may allocate memory. */ + irq_flags = kmsan_enter_runtime(); + ret = kmsan_internal_chain_origin(origin); + kmsan_leave_runtime(irq_flags); + return ret; +} +EXPORT_SYMBOL(__msan_chain_origin); + +void __msan_poison_alloca(void *address, u64 size, char *descr) +{ + depot_stack_handle_t handle; + unsigned long entries[4]; + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + + kmsan_internal_memset_shadow(address, -1, size, /*checked*/true); + + entries[0] = KMSAN_ALLOCA_MAGIC_ORIGIN; + entries[1] = (u64)descr; + entries[2] = (u64)__builtin_return_address(0); + entries[3] = (u64)kmsan_internal_return_address(1); + + /* stack_depot_save() may allocate memory. */ + irq_flags = kmsan_enter_runtime(); + handle = stack_depot_save(entries, ARRAY_SIZE(entries), GFP_ATOMIC); + kmsan_leave_runtime(irq_flags); + kmsan_internal_set_origin(address, size, handle); +} +EXPORT_SYMBOL(__msan_poison_alloca); + +void __msan_unpoison_alloca(void *address, u64 size) +{ + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + + irq_flags = kmsan_enter_runtime(); + kmsan_internal_unpoison_shadow(address, size, /*checked*/true); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(__msan_unpoison_alloca); + +void __msan_warning(u32 origin) +{ + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + irq_flags = kmsan_enter_runtime(); + kmsan_report(origin, /*address*/0, /*size*/0, + /*off_first*/0, /*off_last*/0, /*user_addr*/0, REASON_ANY); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(__msan_warning); + +struct kmsan_context_state *__msan_get_context_state(void) +{ + struct kmsan_context_state *ret; + + ret = kmsan_task_context_state(); + BUG_ON(!ret); + return ret; +} +EXPORT_SYMBOL(__msan_get_context_state); From patchwork Wed Mar 25 16:12:19 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458243 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 66389913 for ; Wed, 25 Mar 2020 16:13:29 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 0A4E920409 for ; Wed, 25 Mar 2020 16:13:29 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="QF/C4SH2" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 0A4E920409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id EF81D6B006C; Wed, 25 Mar 2020 12:13:23 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id ECFF96B006E; Wed, 25 Mar 2020 12:13:23 -0400 (EDT) X-Original-To: 
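Of the helpers in kmsan_instr.c above, __msan_get_context_state() is the least self-explanatory: it hands instrumented code a per-context buffer through which callers pass the metadata of function arguments to callees. A rough, self-contained analogue follows; the structure layout and field names here are invented for illustration and differ from the real struct kmsan_context_state in this series.

#include <stdint.h>

/* Invented, simplified stand-in for the per-context metadata block. */
struct ctx_state_example {
	uint32_t param_shadow[8];	/* shadow of the first scalar arguments */
	uint32_t param_origin[8];	/* matching origin slots */
	uint32_t retval_shadow;
	uint32_t retval_origin;
};

static struct ctx_state_example ctx;	/* the kernel keeps one per context */

/* What the caller conceptually does right before calling callee(x): */
static void caller_side(uint32_t shadow_of_x, uint32_t origin_of_x)
{
	ctx.param_shadow[0] = shadow_of_x;
	ctx.param_origin[0] = origin_of_x;
}

/* ...and what the callee prologue does to pick up that metadata: */
static void callee_side(uint32_t *shadow_of_arg, uint32_t *origin_of_arg)
{
	*shadow_of_arg = ctx.param_shadow[0];
	*origin_of_arg = ctx.param_origin[0];
}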
Date: Wed, 25 Mar 2020 17:12:19 +0100 In-Reply-To: 
<20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-9-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 08/38] kmsan: add KMSAN hooks for kernel subsystems From: glider@google.com To: Jens Axboe , Andy Lutomirski , Wolfram Sang , Christoph Hellwig , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This patch provides hooks that subsystems use to notify KMSAN about changes in the kernel state. Such changes include: - page operations (allocation, deletion, splitting, mapping); - memory allocation and deallocation; - entering and leaving IRQ/NMI/softirq contexts; - copying data between kernel, userspace and hardware. This patch has been split away from the rest of KMSAN runtime to simplify the review process. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Jens Axboe Cc: Andy Lutomirski Cc: Wolfram Sang Cc: Christoph Hellwig Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- v4: - fix a lot of comments by Marco Elver and Andrey Konovalov: - clean up headers and #defines, remove debugging code - simplified KMSAN entry hooks - fixed kmsan_check_skb() Change-Id: I99d1f34f26bef122897cb840dac8d5b34d2b6a80 --- arch/x86/include/asm/kmsan.h | 93 ++++++++ mm/kmsan/kmsan_entry.c | 38 ++++ mm/kmsan/kmsan_hooks.c | 416 +++++++++++++++++++++++++++++++++++ 3 files changed, 547 insertions(+) create mode 100644 arch/x86/include/asm/kmsan.h create mode 100644 mm/kmsan/kmsan_entry.c create mode 100644 mm/kmsan/kmsan_hooks.c diff --git a/arch/x86/include/asm/kmsan.h b/arch/x86/include/asm/kmsan.h new file mode 100644 index 0000000000000..f924f29f90f97 --- /dev/null +++ b/arch/x86/include/asm/kmsan.h @@ -0,0 +1,93 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Assembly bits to safely invoke KMSAN hooks from .S files. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ +#ifndef _ASM_X86_KMSAN_H +#define _ASM_X86_KMSAN_H + +#ifdef CONFIG_KMSAN + +#ifdef __ASSEMBLY__ +.macro KMSAN_PUSH_REGS + pushq %rax + pushq %rcx + pushq %rdx + pushq %rdi + pushq %rsi + pushq %r8 + pushq %r9 + pushq %r10 + pushq %r11 +.endm + +.macro KMSAN_POP_REGS + popq %r11 + popq %r10 + popq %r9 + popq %r8 + popq %rsi + popq %rdi + popq %rdx + popq %rcx + popq %rax + +.endm + +.macro KMSAN_CALL_HOOK fname + KMSAN_PUSH_REGS + call \fname + KMSAN_POP_REGS +.endm + +.macro KMSAN_CONTEXT_ENTER + KMSAN_CALL_HOOK kmsan_context_enter +.endm + +.macro KMSAN_CONTEXT_EXIT + KMSAN_CALL_HOOK kmsan_context_exit +.endm + +#define KMSAN_INTERRUPT_ENTER KMSAN_CONTEXT_ENTER +#define KMSAN_INTERRUPT_EXIT KMSAN_CONTEXT_EXIT + +#define KMSAN_SOFTIRQ_ENTER KMSAN_CONTEXT_ENTER +#define KMSAN_SOFTIRQ_EXIT KMSAN_CONTEXT_EXIT + +#define KMSAN_NMI_ENTER KMSAN_CONTEXT_ENTER +#define KMSAN_NMI_EXIT KMSAN_CONTEXT_EXIT + +#define KMSAN_IST_ENTER(shift_ist) KMSAN_CONTEXT_ENTER +#define KMSAN_IST_EXIT(shift_ist) KMSAN_CONTEXT_EXIT + +.macro KMSAN_UNPOISON_PT_REGS + KMSAN_CALL_HOOK kmsan_unpoison_pt_regs +.endm + +#else +#error this header must be included into an assembly file +#endif + +#else /* ifdef CONFIG_KMSAN */ + +#define KMSAN_INTERRUPT_ENTER +#define KMSAN_INTERRUPT_EXIT +#define KMSAN_SOFTIRQ_ENTER +#define KMSAN_SOFTIRQ_EXIT +#define KMSAN_NMI_ENTER +#define KMSAN_NMI_EXIT +#define KMSAN_SYSCALL_ENTER +#define KMSAN_SYSCALL_EXIT +#define KMSAN_IST_ENTER(shift_ist) +#define KMSAN_IST_EXIT(shift_ist) +#define KMSAN_UNPOISON_PT_REGS + +#endif /* ifdef CONFIG_KMSAN */ +#endif /* ifndef _ASM_X86_KMSAN_H */ diff --git a/mm/kmsan/kmsan_entry.c b/mm/kmsan/kmsan_entry.c new file mode 100644 index 0000000000000..7af31642cd451 --- /dev/null +++ b/mm/kmsan/kmsan_entry.c @@ -0,0 +1,38 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN hooks for entry_64.S + * + * Copyright (C) 2018-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +#include "kmsan.h" + +void kmsan_context_enter(void) +{ + int level = this_cpu_inc_return(kmsan_context_level); + + BUG_ON(level >= KMSAN_NESTED_CONTEXT_MAX); +} +EXPORT_SYMBOL(kmsan_context_enter); + +void kmsan_context_exit(void) +{ + int level = this_cpu_dec_return(kmsan_context_level); + + BUG_ON(level < 0); +} +EXPORT_SYMBOL(kmsan_context_exit); + +void kmsan_unpoison_pt_regs(struct pt_regs *regs) +{ + if (!kmsan_ready || kmsan_in_runtime()) + return; + kmsan_internal_unpoison_shadow(regs, sizeof(*regs), /*checked*/true); +} +EXPORT_SYMBOL(kmsan_unpoison_pt_regs); diff --git a/mm/kmsan/kmsan_hooks.c b/mm/kmsan/kmsan_hooks.c new file mode 100644 index 0000000000000..8ddfd91b1d115 --- /dev/null +++ b/mm/kmsan/kmsan_hooks.c @@ -0,0 +1,416 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * KMSAN hooks for kernel subsystems. + * + * These functions handle creation of KMSAN metadata for memory allocations. + * + * Copyright (C) 2018-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ * + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../slab.h" +#include "kmsan.h" + +/* + * The functions may call back to instrumented code, which, in turn, may call + * these hooks again. To avoid re-entrancy, we use __GFP_NO_KMSAN_SHADOW. + * Instrumented functions shouldn't be called under + * kmsan_enter_runtime()/kmsan_leave_runtime(), because this will lead to + * skipping effects of functions like memset() inside instrumented code. + */ + +/* Called from kernel/kthread.c, kernel/fork.c */ +void kmsan_task_create(struct task_struct *task) +{ + unsigned long irq_flags; + + if (!task) + return; + irq_flags = kmsan_enter_runtime(); + kmsan_internal_task_create(task); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_task_create); + +/* Called from kernel/exit.c */ +void kmsan_task_exit(struct task_struct *task) +{ + struct kmsan_task_state *state = &task->kmsan; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + + state->allow_reporting = false; +} +EXPORT_SYMBOL(kmsan_task_exit); + +/* Called from mm/slub.c */ +void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags) +{ + unsigned long irq_flags; + + if (unlikely(object == NULL)) + return; + if (!kmsan_ready || kmsan_in_runtime()) + return; + /* + * There's a ctor or this is an RCU cache - do nothing. The memory + * status hasn't changed since last use. + */ + if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU)) + return; + + irq_flags = kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_shadow(object, s->object_size, + KMSAN_POISON_CHECK); + else + kmsan_internal_poison_shadow(object, s->object_size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_slab_alloc); + +/* Called from mm/slub.c */ +void kmsan_slab_free(struct kmem_cache *s, void *object) +{ + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + + /* RCU slabs could be legally used after free within the RCU period */ + if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON))) + return; + /* + * If there's a constructor, freed memory must remain in the same state + * till the next allocation. We cannot save its state to detect + * use-after-free bugs, instead we just keep it unpoisoned. 
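/*
 * Illustrative sketch, not part of this patch: in terms of a plain
 * byte-per-byte shadow model, the slab hooks above and below boil down to
 * the following (shadow_of() is an invented helper):
 *
 *	on allocation:
 *		if (flags & __GFP_ZERO)
 *			memset(shadow_of(obj), 0x00, size);	initialized
 *		else
 *			memset(shadow_of(obj), 0xff, size);	uninitialized
 *	on free:
 *		memset(shadow_of(obj), 0xff, size);		poisoned again
 *
 * In both poisoning cases the matching origin slots are filled with a stack
 * depot handle recording the allocation or free stack, which is what later
 * appears in KMSAN reports.
 */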
+ */ + if (s->ctor) + return; + irq_flags = kmsan_enter_runtime(); + kmsan_internal_poison_shadow(object, s->object_size, + GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_slab_free); + +/* Called from mm/slub.c */ +void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags) +{ + unsigned long irq_flags; + + if (unlikely(ptr == NULL)) + return; + if (!kmsan_ready || kmsan_in_runtime()) + return; + irq_flags = kmsan_enter_runtime(); + if (flags & __GFP_ZERO) + kmsan_internal_unpoison_shadow((void *)ptr, size, + /*checked*/true); + else + kmsan_internal_poison_shadow((void *)ptr, size, flags, + KMSAN_POISON_CHECK); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_kmalloc_large); + +/* Called from mm/slub.c */ +void kmsan_kfree_large(const void *ptr) +{ + struct page *page; + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + irq_flags = kmsan_enter_runtime(); + page = virt_to_head_page((void *)ptr); + BUG_ON(ptr != page_address(page)); + kmsan_internal_poison_shadow( + (void *)ptr, PAGE_SIZE << compound_order(page), GFP_KERNEL, + KMSAN_POISON_CHECK | KMSAN_POISON_FREE); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_kfree_large); + +static unsigned long vmalloc_shadow(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, 1, META_SHADOW); +} + +static unsigned long vmalloc_origin(unsigned long addr) +{ + return (unsigned long)kmsan_get_metadata((void *)addr, 1, META_ORIGIN); +} + +/* Called from mm/vmalloc.c */ +void kmsan_vunmap_page_range(unsigned long start, unsigned long end) +{ + __vunmap_page_range(vmalloc_shadow(start), vmalloc_shadow(end)); + __vunmap_page_range(vmalloc_origin(start), vmalloc_origin(end)); +} +EXPORT_SYMBOL(kmsan_vunmap_page_range); + +/* Called from lib/ioremap.c */ +/* + * This function creates new shadow/origin pages for the physical pages mapped + * into the virtual memory. If those physical pages already had shadow/origin, + * those are ignored. 
+ */ +void kmsan_ioremap_page_range(unsigned long start, unsigned long end, + phys_addr_t phys_addr, pgprot_t prot) +{ + unsigned long irq_flags; + struct page *shadow, *origin; + int i, nr; + unsigned long off = 0; + gfp_t gfp_mask = GFP_KERNEL | __GFP_ZERO | __GFP_NO_KMSAN_SHADOW; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + irq_flags = kmsan_enter_runtime(); + for (i = 0; i < nr; i++, off += PAGE_SIZE) { + shadow = alloc_pages(gfp_mask, 1); + origin = alloc_pages(gfp_mask, 1); + __vmap_page_range_noflush(vmalloc_shadow(start + off), + vmalloc_shadow(start + off + PAGE_SIZE), + prot, &shadow); + __vmap_page_range_noflush(vmalloc_origin(start + off), + vmalloc_origin(start + off + PAGE_SIZE), + prot, &origin); + } + flush_cache_vmap(vmalloc_shadow(start), vmalloc_shadow(end)); + flush_cache_vmap(vmalloc_origin(start), vmalloc_origin(end)); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_ioremap_page_range); + +void kmsan_iounmap_page_range(unsigned long start, unsigned long end) +{ + int i, nr; + struct page *shadow, *origin; + unsigned long v_shadow, v_origin; + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + + nr = (end - start) / PAGE_SIZE; + irq_flags = kmsan_enter_runtime(); + v_shadow = (unsigned long)vmalloc_shadow(start); + v_origin = (unsigned long)vmalloc_origin(start); + for (i = 0; i < nr; i++, v_shadow += PAGE_SIZE, v_origin += PAGE_SIZE) { + shadow = vmalloc_to_page_or_null((void *)v_shadow); + origin = vmalloc_to_page_or_null((void *)v_origin); + __vunmap_page_range(v_shadow, v_shadow + PAGE_SIZE); + __vunmap_page_range(v_origin, v_origin + PAGE_SIZE); + if (shadow) + __free_pages(shadow, 1); + if (origin) + __free_pages(origin, 1); + } + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_iounmap_page_range); + +/* Called from include/linux/uaccess.h, include/linux/uaccess.h */ +void kmsan_copy_to_user(const void *to, const void *from, + size_t to_copy, size_t left) +{ + if (!kmsan_ready || kmsan_in_runtime()) + return; + /* + * At this point we've copied the memory already. It's hard to check it + * before copying, as the size of actually copied buffer is unknown. + */ + + /* copy_to_user() may copy zero bytes. No need to check. */ + if (!to_copy) + return; + /* Or maybe copy_to_user() failed to copy anything. */ + if (to_copy == left) + return; + if ((u64)to < TASK_SIZE) { + /* This is a user memory access, check it. */ + kmsan_internal_check_memory((void *)from, to_copy - left, to, + REASON_COPY_TO_USER); + return; + } + /* Otherwise this is a kernel memory access. This happens when a compat + * syscall passes an argument allocated on the kernel stack to a real + * syscall. + * Don't check anything, just copy the shadow of the copied bytes. + */ + kmsan_memcpy_metadata((void *)to, (void *)from, to_copy - left); +} +EXPORT_SYMBOL(kmsan_copy_to_user); + +void kmsan_gup_pgd_range(struct page **pages, int nr) +{ + int i; + void *page_addr; + + /* + * gup_pgd_range() has just created a number of new pages that KMSAN + * treats as uninitialized. In the case they belong to the userspace + * memory, unpoison the corresponding kernel pages. + */ + for (i = 0; i < nr; i++) { + page_addr = page_address(pages[i]); + if (((u64)page_addr < TASK_SIZE) && + ((u64)page_addr + PAGE_SIZE < TASK_SIZE)) + kmsan_unpoison_shadow(page_addr, PAGE_SIZE); + } + +} +EXPORT_SYMBOL(kmsan_gup_pgd_range); + +/* Helper function to check an SKB. 
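/*
 * Illustrative sketch, not part of this patch: the decision made by
 * kmsan_copy_to_user() above, in pseudo-C (n bytes requested, "left" bytes
 * reported as not copied by the uaccess routine; check_memory() and
 * copy_shadow() are invented stand-ins):
 *
 *	copied = n - left;
 *	if (!copied)
 *		return;				nothing left the kernel
 *	if ((u64)to < TASK_SIZE)
 *		check_memory(from, copied);	data escaped to userspace,
 *						report if uninitialized
 *	else
 *		copy_shadow(to, from, copied);	kernel-to-kernel copy (compat
 *						syscall case), just propagate
 *						the metadata
 */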
*/ +void kmsan_check_skb(const struct sk_buff *skb) +{ + struct sk_buff *frag_iter; + int i; + skb_frag_t *f; + u32 p_off, p_len, copied; + struct page *p; + u8 *vaddr; + + if (!skb || !skb->len) + return; + + kmsan_internal_check_memory(skb->data, skb_headlen(skb), 0, REASON_ANY); + if (skb_is_nonlinear(skb)) { + for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) { + f = &skb_shinfo(skb)->frags[i]; + + skb_frag_foreach_page(f, skb_frag_off(f), + skb_frag_size(f), + p, p_off, p_len, copied) { + + vaddr = kmap_atomic(p); + kmsan_internal_check_memory(vaddr + p_off, + p_len, /*user_addr*/ 0, + REASON_ANY); + kunmap_atomic(vaddr); + } + } + } + skb_walk_frags(skb, frag_iter) + kmsan_check_skb(frag_iter); +} +EXPORT_SYMBOL(kmsan_check_skb); + +/* Helper function to check an URB. */ +void kmsan_handle_urb(const struct urb *urb, bool is_out) +{ + if (!urb) + return; + if (is_out) + kmsan_internal_check_memory(urb->transfer_buffer, + urb->transfer_buffer_length, + /*user_addr*/ 0, REASON_SUBMIT_URB); + else + kmsan_internal_unpoison_shadow(urb->transfer_buffer, + urb->transfer_buffer_length, + /*checked*/false); +} +EXPORT_SYMBOL(kmsan_handle_urb); + +static void kmsan_handle_dma_page(const void *addr, size_t size, + enum dma_data_direction dir) +{ + switch (dir) { + case DMA_BIDIRECTIONAL: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/0, + REASON_ANY); + kmsan_internal_unpoison_shadow((void *)addr, size, + /*checked*/false); + break; + case DMA_TO_DEVICE: + kmsan_internal_check_memory((void *)addr, size, /*user_addr*/0, + REASON_ANY); + break; + case DMA_FROM_DEVICE: + kmsan_internal_unpoison_shadow((void *)addr, size, + /*checked*/false); + break; + case DMA_NONE: + break; + } +} + +/* Helper function to handle DMA data transfers. */ +void kmsan_handle_dma(const void *addr, size_t size, + enum dma_data_direction dir) +{ + u64 page_offset, to_go, uaddr = (u64)addr; + + /* + * The kernel may occasionally give us adjacent DMA pages not belonging + * to the same allocation. Process them separately to avoid triggering + * internal KMSAN checks. + */ + while (size > 0) { + page_offset = uaddr % PAGE_SIZE; + to_go = min(PAGE_SIZE - page_offset, (u64)size); + kmsan_handle_dma_page((void *)uaddr, to_go, dir); + uaddr += to_go; + size -= to_go; + } +} +EXPORT_SYMBOL(kmsan_handle_dma); + +/* Functions from kmsan-checks.h follow. */ +void kmsan_poison_shadow(const void *address, size_t size, gfp_t flags) +{ + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + irq_flags = kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. */ + kmsan_internal_poison_shadow((void *)address, size, flags, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_poison_shadow); + +void kmsan_unpoison_shadow(const void *address, size_t size) +{ + unsigned long irq_flags; + + if (!kmsan_ready || kmsan_in_runtime()) + return; + + irq_flags = kmsan_enter_runtime(); + /* The users may want to poison/unpoison random memory. 
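/*
 * Illustrative example, not part of this patch: how kmsan_handle_dma()
 * above splits one mapping into per-page chunks, assuming PAGE_SIZE == 4096.
 * For addr ending in 0xf40 and size == 9000 the loop issues:
 *
 *	chunk 1: page offset 0xf40	-> to_go = 4096 - 3904 = 192
 *	chunk 2: page-aligned		-> to_go = 4096
 *	chunk 3: page-aligned		-> to_go = 4096
 *	chunk 4: remainder		-> to_go = 9000 - 192 - 8192 = 616
 *
 * so every check/unpoison stays within a single page and never spans two
 * unrelated allocations, which is exactly what the comment in that function
 * is guarding against.
 */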
*/ + kmsan_internal_unpoison_shadow((void *)address, size, + KMSAN_POISON_NOCHECK); + kmsan_leave_runtime(irq_flags); +} +EXPORT_SYMBOL(kmsan_unpoison_shadow); + +void kmsan_check_memory(const void *addr, size_t size) +{ + return kmsan_internal_check_memory((void *)addr, size, /*user_addr*/ 0, + REASON_ANY); +} +EXPORT_SYMBOL(kmsan_check_memory);
From patchwork Wed Mar 25 16:12:20 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458245
Date: Wed, 25 Mar 2020 17:12:20 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-10-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 09/38] kmsan: stackdepot: don't allocate KMSAN metadata for stackdepot From: glider@google.com To: Andrey Ryabinin , Jens Axboe , Andy Lutomirski , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , Christoph Hellwig , linux-mm@kvack.org
We assume an uninitialized value couldn't come from stackdepot, so we don't track stackdepot allocations with KMSAN.
Signed-off-by: Alexander Potapenko Cc: Andrey Ryabinin Cc: Jens Axboe Cc: Andy Lutomirski Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: Christoph Hellwig Cc: linux-mm@kvack.org Reviewed-by: Andrey Konovalov --- v4: - set __GFP_NO_KMSAN_SHADOW explicitly for allocations Change-Id: Ic3ec9b3dff3fff2732d874508a3582fb26ff0b1f --- lib/stackdepot.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/lib/stackdepot.c b/lib/stackdepot.c index 195ce3dc7c37e..ba584910ad66b 100644 --- a/lib/stackdepot.c +++ b/lib/stackdepot.c @@ -297,7 +297,7 @@ depot_stack_handle_t stack_depot_save(unsigned long *entries, */ alloc_flags &= ~GFP_ZONEMASK; alloc_flags &= (GFP_ATOMIC | GFP_KERNEL); - alloc_flags |= __GFP_NOWARN; + alloc_flags |= (__GFP_NOWARN | __GFP_NO_KMSAN_SHADOW); page = alloc_pages(alloc_flags, STACK_ALLOC_ORDER); if (page) prealloc = page_address(page);
From patchwork Wed Mar 25 16:12:21 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458247
Date: Wed, 25 Mar 2020 17:12:21 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-11-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 10/38] kmsan: define READ_ONCE_NOCHECK() From: glider@google.com To: Mark Rutland , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org
READ_ONCE_NOCHECK() is already used by KASAN to ignore memory accesses from e.g. stack unwinders. Define READ_ONCE_NOCHECK() for KMSAN so that it returns initialized values. This helps defeat false positives from leftover stack contents.
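A minimal illustration of the intended use, assuming an unwinder-style reader of possibly dead stack slots (the helper below is invented; only READ_ONCE_NOCHECK() itself comes from this patch):

/* Illustrative only: reading a stack slot that may hold stale data. */
static unsigned long read_stack_slot(unsigned long *slot)
{
	/*
	 * READ_ONCE() here would make KMSAN flag the caller as soon as the
	 * value is compared or printed, because a dead frame's contents are
	 * uninitialized from KMSAN's point of view. READ_ONCE_NOCHECK()
	 * wraps the access in KMSAN_INIT_VALUE(), so the result is treated
	 * as fully initialized and no false positive is produced.
	 */
	return READ_ONCE_NOCHECK(*slot);
}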
Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Mark Rutland Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org Reviewed-by: Andrey Konovalov --- v3: - removed unnecessary #ifdef as requested by Mark Rutland v4: - added an #include as requested by Marco Elver Change-Id: Ib38369ba038ab3b581d8e45b81036c3304fb79cb --- include/linux/compiler.h | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/include/linux/compiler.h b/include/linux/compiler.h index f504edebd5d71..c6c67729729e3 100644 --- a/include/linux/compiler.h +++ b/include/linux/compiler.h @@ -279,6 +279,7 @@ void __write_once_size(volatile void *p, void *res, int size) */ #include #include +#include #define __READ_ONCE(x, check) \ ({ \ @@ -294,9 +295,9 @@ void __write_once_size(volatile void *p, void *res, int size) /* * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need - * to hide memory access from KASAN. + * to hide memory access from KASAN or KMSAN. */ -#define READ_ONCE_NOCHECK(x) __READ_ONCE(x, 0) +#define READ_ONCE_NOCHECK(x) KMSAN_INIT_VALUE(__READ_ONCE(x, 0)) static __no_kasan_or_inline unsigned long read_word_at_a_time(const void *addr)
From patchwork Wed Mar 25 16:12:22 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458253
Date: Wed, 25 Mar 2020 17:12:22 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-12-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 11/38] kmsan: make READ_ONCE_TASK_STACK() return initialized values From: glider@google.com To: Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org
mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: To avoid false positives, assume that reading from the task stack always produces initialized values. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org Acked-by: Marco Elver Reviewed-by: Andrey Konovalov --- v4: - added an #include as requested by Marco Elver Change-Id: Ie73e5a41fdc8195699928e65f5cbe0d3d3c9e2fa --- arch/x86/include/asm/unwind.h | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h index 499578f7e6d7b..82c3bceb9999c 100644 --- a/arch/x86/include/asm/unwind.h +++ b/arch/x86/include/asm/unwind.h @@ -4,6 +4,7 @@ #include #include +#include #include #include @@ -100,9 +101,10 @@ void unwind_module_init(struct module *mod, void *orc_ip, size_t orc_ip_size, #endif /* - * This disables KASAN checking when reading a value from another task's stack, - * since the other task could be running on another CPU and could have poisoned - * the stack in the meantime. + * This disables KASAN/KMSAN checking when reading a value from another task's + * stack, since the other task could be running on another CPU and could have + * poisoned the stack in the meantime. Frame pointers are uninitialized by + * default, so for KMSAN we mark the return value initialized unconditionally. 
*/ #define READ_ONCE_TASK_STACK(task, x) \ ({ \ @@ -111,7 +113,7 @@ void unwind_module_init(struct module *mod, void *orc_ip, size_t orc_ip_size, val = READ_ONCE(x); \ else \ val = READ_ONCE_NOCHECK(x); \ - val; \ + KMSAN_INIT_VALUE(val); \ }) static inline bool task_on_another_cpu(struct task_struct *task) From patchwork Wed Mar 25 16:12:23 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458255 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 924F714B4 for ; Wed, 25 Mar 2020 16:13:44 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 4547220409 for ; Wed, 25 Mar 2020 16:13:44 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Nke1rrgA" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 4547220409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 582876B0072; Wed, 25 Mar 2020 12:13:36 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 50BD36B0073; Wed, 25 Mar 2020 12:13:36 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 383F06B0074; Wed, 25 Mar 2020 12:13:36 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0223.hostedemail.com [216.40.44.223]) by kanga.kvack.org (Postfix) with ESMTP id 210566B0072 for ; Wed, 25 Mar 2020 12:13:36 -0400 (EDT) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 0E32F83E8A for ; Wed, 25 Mar 2020 16:13:36 +0000 (UTC) X-FDA: 76634380032.15.magic67_76b66b787e85e X-Spam-Summary: 2,0,0,9f9125dcd3be754a,d41d8cd98f00b204,3lon7xgykcb09eb67k9hh9e7.5hfebgnq-ffdo35d.hk9@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1542:1593:1594:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3152:3353:3867:3868:3870:3874:4321:5007:6117:6261:6653:6742:6743:7901:7903:9969:10004:10400:11026:11232:11473:11658:11914:12043:12048:12291:12296:12297:12438:12555:12895:12986:13846:14096:14097:14181:14394:14659:14721:21080:21365:21444:21451:21627:21990:30003:30054:30064,0,RBL:209.85.221.201:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: magic67_76b66b787e85e X-Filterd-Recvd-Size: 5607 Received: from mail-vk1-f201.google.com (mail-vk1-f201.google.com [209.85.221.201]) by imf08.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:13:35 +0000 (UTC) Received: by mail-vk1-f201.google.com with SMTP id t138so957850vkt.17 for ; Wed, 25 Mar 2020 09:13:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; 
bh=Dty9U5lEq/JO4Z1gO2OONUzqNNcvuqvvEN3in6yhbGM=; b=Nke1rrgAXYVPzJszS0k2Yq/3K2/LuEKPUSBsmc1kBzGvsV/rH2AsZesOaEGHrQwBhP U9KFjn65kCxXKLHqx9OxjskDKNZKawmSfu/c6lguW1N5IUgDRBe1qpQoDLK6CYi8cZz/ F0gmk0QnY0YPPSC/fileT4Z/ObF0dD19N8c1YW0RvaiEMxZYW4TPBDW3H1v6bHkuTwkA XkKqJnxbQeSbw//rmc1EjdvXaFAaLq9HsdDUg7pBfbeaRz129FSmAYUgKhfXPxSowAFh hLDhHcGwvzabrqKvGm9pIT1Uthz9p7uRRxLcWdZ7p64VNxGi0Y6K+PyD/WyI8Li8iFL7 qxfQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Dty9U5lEq/JO4Z1gO2OONUzqNNcvuqvvEN3in6yhbGM=; b=e+PbDhwqj7MlkSDvTsMT2ON55kEt9fxoaDEzIxuDdGkha5SAY+Y/HFYm06BrAAR//V 6I3AmhQFw79xH3lYroAUzcriFGutDILdIkQZNiUjDqAGBzujU4xDmdOfGbPLm3mdp0X0 fKEmXYpcOx76Hp8G4gWL6rzW9T9w3rwsWSOkQP4yfXZDso6MBSVdpZMev+bptJz5CF9h OGNAzQ9b0PO0oSLI7CHky348UX5zdGKtLXTSqvV4ZXYDWKO8ZQvwn3a4S732ZkjaurF4 uLh5muZFCzzTp4gGasc42NuGwk4tvC6wZm3njIgglVRH5HC0XQ8AaCf6YVxawZJnQxS2 qZgQ== X-Gm-Message-State: ANhLgQ3LY89WEJxeOEWhWZjEAWs92z6poZEyklABPdKPypvFy6xTvnQ5 Mvhj1BNKQMgFmgi9A+O1UKP5aMqE2RU= X-Google-Smtp-Source: ADFU+vs/Cz560/OUQZI4xKP+/UNFKeoTP5Lkh7T10q1Rfq/qtD0hWzzM8zVLOJAMhUrXHbucQ3Y5sHeWNb8= X-Received: by 2002:ab0:770d:: with SMTP id z13mr3005021uaq.58.1585152814296; Wed, 25 Mar 2020 09:13:34 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:23 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-13-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 12/38] kmsan: x86: sync metadata pages on page fault From: glider@google.com To: Ingo Molnar , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN assumes shadow and origin pages for every allocated page are accessible. For pages in vmalloc region those metadata pages reside in [VMALLOC_END, VMALLOC_META_END), therefore we must sync a bigger memory region. 
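A minimal sketch of the range check involved, for illustration only: kmsan_vmalloc_fault_range() is a hypothetical helper name (the patch open-codes these checks), and VMALLOC_META_END, MODULES_SHADOW_START and MODULES_ORIGIN_END are the KMSAN-specific bounds the patch relies on.

static inline bool kmsan_vmalloc_fault_range(unsigned long address)
{
#ifdef CONFIG_KMSAN
	/*
	 * Shadow and origin pages for vmalloc and module memory live past
	 * VMALLOC_END, so the fault handler must cover them as well.
	 */
	if (address >= VMALLOC_START && address < VMALLOC_META_END)
		return true;
	return address >= MODULES_SHADOW_START &&
	       address < MODULES_ORIGIN_END;
#else
	return address >= VMALLOC_START && address < VMALLOC_END;
#endif
}

The hunks below follow this shape: vmalloc_sync_mappings() syncs PGDs over the wider ranges, and vmalloc_fault() accepts faulting addresses from them.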
Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Ingo Molnar Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org Reviewed-by: Andrey Konovalov --- Change-Id: I0d54855489870ef1180b37fe2120b601da464bf7 --- arch/x86/mm/fault.c | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index a51df516b87bf..d22e373fa2124 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -331,11 +331,21 @@ static void dump_pagetable(unsigned long address) void vmalloc_sync_mappings(void) { +#ifndef CONFIG_KMSAN /* * 64-bit mappings might allocate new p4d/pud pages * that need to be propagated to all tasks' PGDs. */ sync_global_pgds(VMALLOC_START & PGDIR_MASK, VMALLOC_END); +#else + /* + * For KMSAN, make sure metadata pages for vmalloc area and modules are + * also synced. + */ + sync_global_pgds(VMALLOC_START & PGDIR_MASK, VMALLOC_META_END); + sync_global_pgds(MODULES_SHADOW_START & PGDIR_MASK, + MODULES_ORIGIN_END); +#endif } void vmalloc_sync_unmappings(void) @@ -360,7 +370,17 @@ static noinline int vmalloc_fault(unsigned long address) pte_t *pte; /* Make sure we are in vmalloc area: */ +#ifdef CONFIG_KMSAN + /* + * For KMSAN, make sure metadata pages for vmalloc area and modules are + * also synced. + */ + if (!(address >= VMALLOC_START && address < VMALLOC_META_END) && + !(address >= MODULES_SHADOW_START && + address < MODULES_ORIGIN_END)) +#else if (!(address >= VMALLOC_START && address < VMALLOC_END)) +#endif return -1; /* From patchwork Wed Mar 25 16:12:24 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458257 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3B84B913 for ; Wed, 25 Mar 2020 16:13:48 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EFE6B20409 for ; Wed, 25 Mar 2020 16:13:47 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="h9uyjMA3" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EFE6B20409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 6BB3D6B0073; Wed, 25 Mar 2020 12:13:41 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 61CC46B0074; Wed, 25 Mar 2020 12:13:41 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 50C076B0075; Wed, 25 Mar 2020 12:13:41 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0072.hostedemail.com [216.40.44.72]) by kanga.kvack.org (Postfix) with ESMTP id 3ABF36B0073 for ; Wed, 25 Mar 2020 12:13:41 -0400 (EDT) Received: from smtpin10.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 172851808B137 for ; Wed, 25 Mar 2020 16:13:41 +0000 (UTC) X-FDA: 76634380242.10.duck62_7757485972a00 X-Spam-Summary: 
2,0,0,644617ed7be5ba9e,d41d8cd98f00b204,3myn7xgykccache9anckkcha.8kihejqt-iigr68g.knc@flex--glider.bounces.google.com,,RULES_HIT:1:2:41:69:152:355:379:541:800:960:966:973:988:989:1042:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1593:1594:1605:1730:1747:1777:1792:2196:2199:2393:2538:2553:2559:2562:3138:3139:3140:3141:3142:3152:3865:3866:3867:3868:3870:3871:3872:3873:3874:4050:4250:4321:4385:5007:6119:6261:6653:6742:6743:7875:7903:7904:8660:9969:10004:11026:11473:11658:11914:12043:12048:12291:12294:12295:12296:12297:12438:12555:12683:12895:12986:13148:13161:13229:13230:13846:14394:14659:21080:21365:21444:21451:21627:21740:21772:21990:30054:30056:30064:30067:30070:30090,0,RBL:209.85.221.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: duck62_7757485972a00 X-Filterd-Recvd-Size: 10645 Received: from mail-wr1-f74.google.com (mail-wr1-f74.google.com [209.85.221.74]) by imf45.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:13:39 +0000 (UTC) Received: by mail-wr1-f74.google.com with SMTP id d17so1358140wrw.19 for ; Wed, 25 Mar 2020 09:13:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=fl1uw9ngyzzftL52TQ78sZ6bEc80fwgvsXYc9OM2PVk=; b=h9uyjMA3cHnpsYz6tVhzHnvXutMoXBxt6lXpNWAFbU0a/KGV1/JeCFJ7sGRUhV2GLs Oy9wQ8zFqyy80gGmX86uQmmmKaoHbOzzqb5CNR71DFM5NiKyw4D61Cgt0c05bUTiF2qz GkMdnPvfvsHOxwoEFQYRUYWC6fuqR7RAoqmITi4lQ/nNtq3ypRCvGcR7YK41S2HGcLvO EQ6IhYSdDhwOZ4dDukuIh2GraUD939b10sZV6thK+6XwjNeGEL06QG4oRxAyDNqUwY6M NwZoZCSLSUaqoR7yRT7/nAdktuoHJ6pJ/uOBoqqauKWQIMWlbIvB+oKMapeDe/D/WzEu KlOg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=fl1uw9ngyzzftL52TQ78sZ6bEc80fwgvsXYc9OM2PVk=; b=BtuOn5aCEKUXwCDD+Vb5UfBVFXT24JvEMXdgkt458/IBcJi9J3dUWyJSsMl+qyXszV yyLspKxz8iMUo9GC06eFxEDgLXNJOMsC9/in89N38M5humo09/+dv7HVAYTziudDfGBR yOmG6KGj3gU111guScZXEsXJCJCG4y3+i1lA8eG0eueDzdvrAFSEdJsoaLzxdRA8EUjU LF6M2b8iFgEtUt2YH4R9QBAtlpN9m0zddHka6Tc3PMYjCuNrUFq11JZhPd44MvJAFKkt eJWNOYIoWYGnRG6yb04NXZNwkx2OSh22Ka1VRwQtcIyDWzNpeiq7ucvKyXuH1sphEwNa CZTg== X-Gm-Message-State: ANhLgQ3vRuDPd32hw/+TITp23Gj29NmrjT7cBabA0vcXLkgo7Lgb24Wh /mQwixmHoF0YfJpc23CEgiZdOLEaoSk= X-Google-Smtp-Source: ADFU+vuy9srKRSIwQO1mQQqJphOrG0+dNT8Z5CY58s6jiQJNQyZ0mU0UpbdakK62uV/mWkP+2RqdzGqMhxU= X-Received: by 2002:adf:e68b:: with SMTP id r11mr3983948wrm.138.1585152817262; Wed, 25 Mar 2020 09:13:37 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:24 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-14-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 13/38] kmsan: add tests for KMSAN From: glider@google.com To: Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, 
gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The initial commit adds several tests that trigger KMSAN warnings in simple cases. To use, build the kernel with CONFIG_TEST_KMSAN and do `insmod test_kmsan.ko` Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- v2: - added printk_test() v4: - test_kmsan: don't report -Wuninitialized warnings in the test - test_kmsan.c: addressed comments by Andrey Konovalov Change-Id: I287e86ae83a82b770f2baa46e5bbdce1dfa65195 --- lib/Makefile | 2 + lib/test_kmsan.c | 229 +++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 231 insertions(+) create mode 100644 lib/test_kmsan.c diff --git a/lib/Makefile b/lib/Makefile index ab68a86743607..d8058c5c05826 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -68,6 +68,8 @@ CFLAGS_test_kasan.o += $(call cc-disable-warning, vla) obj-$(CONFIG_TEST_UBSAN) += test_ubsan.o CFLAGS_test_ubsan.o += $(call cc-disable-warning, vla) UBSAN_SANITIZE_test_ubsan.o := y +obj-$(CONFIG_TEST_KMSAN) += test_kmsan.o +CFLAGS_test_kmsan.o += $(call cc-disable-warning, uninitialized) obj-$(CONFIG_TEST_KSTRTOX) += test-kstrtox.o obj-$(CONFIG_TEST_LIST_SORT) += test_list_sort.o obj-$(CONFIG_TEST_MIN_HEAP) += test_min_heap.o diff --git a/lib/test_kmsan.c b/lib/test_kmsan.c new file mode 100644 index 0000000000000..f1780ed0cd315 --- /dev/null +++ b/lib/test_kmsan.c @@ -0,0 +1,229 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Module for testing KMSAN. + * + * Copyright (C) 2017-2019 Google LLC + * Author: Alexander Potapenko + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + * + */ + +/* + * Tests below use noinline and volatile to work around compiler optimizations + * that may mask KMSAN bugs. 
+ */ +#define pr_fmt(fmt) "kmsan test: %s : " fmt, __func__ + +#include +#include +#include +#include +#include + +#define CHECK(x) \ + do { \ + if (x) \ + pr_info(#x " is true\n"); \ + else \ + pr_info(#x " is false\n"); \ + } while (0) + +int signed_sum3(int a, int b, int c) +{ + return a + b + c; +} + +noinline void uninit_kmalloc_test(void) +{ + int *ptr; + + pr_info("-----------------------------\n"); + pr_info("uninitialized kmalloc test (UMR report)\n"); + ptr = kmalloc(sizeof(int), GFP_KERNEL); + pr_info("kmalloc returned %p\n", ptr); + CHECK(*ptr); +} +noinline void init_kmalloc_test(void) +{ + int *ptr; + + pr_info("-----------------------------\n"); + pr_info("initialized kmalloc test (no reports)\n"); + ptr = kmalloc(sizeof(int), GFP_KERNEL); + memset(ptr, 0, sizeof(int)); + pr_info("kmalloc returned %p\n", ptr); + CHECK(*ptr); +} + +noinline void init_kzalloc_test(void) +{ + int *ptr; + + pr_info("-----------------------------\n"); + pr_info("initialized kzalloc test (no reports)\n"); + ptr = kzalloc(sizeof(int), GFP_KERNEL); + pr_info("kzalloc returned %p\n", ptr); + CHECK(*ptr); +} + +noinline void uninit_multiple_args_test(void) +{ + volatile int a; + volatile char b = 3, c; + + pr_info("-----------------------------\n"); + pr_info("uninitialized local passed to fn (UMR report)\n"); + CHECK(signed_sum3(a, b, c)); +} + +noinline void uninit_stack_var_test(void) +{ + int cond; + + pr_info("-----------------------------\n"); + pr_info("uninitialized stack variable (UMR report)\n"); + CHECK(cond); +} + +noinline void init_stack_var_test(void) +{ + volatile int cond = 1; + + pr_info("-----------------------------\n"); + pr_info("initialized stack variable (no reports)\n"); + CHECK(cond); +} + +noinline void two_param_fn_2(int arg1, int arg2) +{ + CHECK(arg1); + CHECK(arg2); +} + +noinline void one_param_fn(int arg) +{ + two_param_fn_2(arg, arg); + CHECK(arg); +} + +noinline void two_param_fn(int arg1, int arg2) +{ + int init = 0; + + one_param_fn(init); + CHECK(arg1); + CHECK(arg2); +} + +noinline void params_test(void) +{ + volatile int uninit, init = 1; + + pr_info("-----------------------------\n"); + pr_info("uninit passed through a function parameter (UMR report)\n"); + two_param_fn(uninit, init); +} + +noinline void do_uninit_local_array(char *array, int start, int stop) +{ + int i; + volatile char uninit; + + for (i = start; i < stop; i++) + array[i] = uninit; +} + +noinline void uninit_kmsan_check_memory_test(void) +{ + volatile char local_array[8]; + + pr_info("-----------------------------\n"); + pr_info("kmsan_check_memory() called on uninit local (UMR report)\n"); + do_uninit_local_array((char *)local_array, 5, 7); + + kmsan_check_memory((char *)local_array, 8); +} + +noinline void init_kmsan_vmap_vunmap_test(void) +{ + const int npages = 2; + struct page *pages[npages]; + void *vbuf; + int i; + + pr_info("-----------------------------\n"); + pr_info("pages initialized via vmap (no reports)\n"); + + for (i = 0; i < npages; i++) + pages[i] = alloc_page(GFP_KERNEL); + vbuf = vmap(pages, npages, VM_MAP, PAGE_KERNEL); + memset(vbuf, 0xfe, npages * PAGE_SIZE); + for (i = 0; i < npages; i++) + kmsan_check_memory(page_address(pages[i]), PAGE_SIZE); + + if (vbuf) + vunmap(vbuf); + for (i = 0; i < npages; i++) + if (pages[i]) + __free_page(pages[i]); +} + +noinline void init_vmalloc_test(void) +{ + char *buf; + int npages = 8, i; + + pr_info("-----------------------------\n"); + pr_info("pages initialized via vmap (no reports)\n"); + buf = vmalloc(PAGE_SIZE * npages); + 
buf[0] = 1; + memset(buf, 0xfe, PAGE_SIZE * npages); + CHECK(buf[0]); + for (i = 0; i < npages; i++) + kmsan_check_memory(&buf[PAGE_SIZE * i], PAGE_SIZE); + vfree(buf); +} + +noinline void uaf_test(void) +{ + volatile int *var; + + pr_info("-----------------------------\n"); + pr_info("use-after-free in kmalloc-ed buffer (UMR report)\n"); + var = kmalloc(80, GFP_KERNEL); + var[3] = 0xfeedface; + kfree((int *)var); + CHECK(var[3]); +} + +noinline void printk_test(void) +{ + volatile int uninit; + + pr_info("-----------------------------\n"); + pr_info("uninit local passed to pr_info() (UMR report)\n"); + pr_info("%px contains %d\n", &uninit, uninit); +} + +static noinline int __init kmsan_tests_init(void) +{ + uninit_kmalloc_test(); + init_kmalloc_test(); + init_kzalloc_test(); + uninit_multiple_args_test(); + uninit_stack_var_test(); + init_stack_var_test(); + params_test(); + uninit_kmsan_check_memory_test(); + init_kmsan_vmap_vunmap_test(); + init_vmalloc_test(); + uaf_test(); + printk_test(); + return -EAGAIN; +} + +module_init(kmsan_tests_init); +MODULE_LICENSE("GPL"); From patchwork Wed Mar 25 16:12:25 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458259 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EAC731667 for ; Wed, 25 Mar 2020 16:13:51 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9D81920774 for ; Wed, 25 Mar 2020 16:13:51 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="dOxenkAC" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9D81920774 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 8DEB66B0074; Wed, 25 Mar 2020 12:13:42 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 8B5C46B0075; Wed, 25 Mar 2020 12:13:42 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7CBB36B0078; Wed, 25 Mar 2020 12:13:42 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0097.hostedemail.com [216.40.44.97]) by kanga.kvack.org (Postfix) with ESMTP id 65BDA6B0074 for ; Wed, 25 Mar 2020 12:13:42 -0400 (EDT) Received: from smtpin29.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 4A815181AC9B6 for ; Wed, 25 Mar 2020 16:13:42 +0000 (UTC) X-FDA: 76634380284.29.waste92_7793c7ef4184a X-Spam-Summary: 
2,0,0,9f48159e957dd593,d41d8cd98f00b204,3nin7xgykccmfkhcdqfnnfkd.bnlkhmtw-llju9bj.nqf@flex--glider.bounces.google.com,,RULES_HIT:1:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1593:1594:1605:1730:1747:1777:1792:1981:2194:2199:2393:2559:2562:2636:2692:2693:3138:3139:3140:3141:3142:3152:3865:3866:3867:3868:3870:3871:4250:5007:6261:6653:6742:6743:7875:7903:8957:9969:10004:11026:11473:11658:11914:12043:12048:12291:12296:12297:12438:12555:12683:12895:13221:13229:13846:14096:14097:14394:14659:21080:21251:21365:21444:21451:21611:21627:30054:30055:30064,0,RBL:209.85.219.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: waste92_7793c7ef4184a X-Filterd-Recvd-Size: 13744 Received: from mail-qv1-f73.google.com (mail-qv1-f73.google.com [209.85.219.73]) by imf42.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:13:41 +0000 (UTC) Received: by mail-qv1-f73.google.com with SMTP id d7so2127145qvq.12 for ; Wed, 25 Mar 2020 09:13:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=48wUDVETC0+4No/mdLPeFXowssHnkUObE08Sk+uJSos=; b=dOxenkAC707zL0bnr2Pd+dAhRQfgqFE3WYg+G9ohS29UfnWFLNQwrA4MKFaFhSB4l2 hncR/0wG8yN8hRB7z8nEpZK/4OltYo/M+TeMLcPn9wkLHqlCe7cz7ajRHrV6v9mvPcGr pvekbRGOLJIh1SywHp/jCaCBqt8KDUUMbgDbmnmsDRMFvDowj0o/65n4nrgstYDpFrT8 5kUzxRVtfUJ1tOKBU7SilbQSfM3lpG9QFV1UufYWrWDoF7uNfY5E2IgUAtmrFj391A4X Ud3VDaaaUMZv+b3eQDUqCpIUi1b8m5JcOml6H6WoveiG3K0PDIn6pCpCr062JC+++eS8 dOtA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=48wUDVETC0+4No/mdLPeFXowssHnkUObE08Sk+uJSos=; b=Au5WO9IyGaUiInJUULnQebaag5PsNq+8Xbxas/gwGq6ysQHtwnNc/8Bgo09/T13CdR P2WM56Dc6RYbmZP8z+RVmGda8dMkcAl0PXRmV+xg81Pbq+fGzli8JO9cC6Mg17hDZFsL r6wceDol/SpDMQbbDZgwmQVsZA/XE5rBH22ykfu7SGVhx9DwdikRrJh9zLHxxHHs7bgp jf2TZu8Xvm+DZaR/437WKxNt3JgVuPkddrikBxpAaWIGHXZWcni13DQuUv2lH+15S5jk /DoXPJgdXstcu+n6eEeCXTecqwlafVP3KxQyQxg0PspRdXi7UiTLJVg1TY0JN4LtWSVo 5Rrg== X-Gm-Message-State: ANhLgQ06UDmE/8qRWDpzcxvxKLOrrYcvV51QN3gnmIkv/nTtw2wOYfRe PJbZh1GC7LJD8v+h0hAzSGevhJlGnd8= X-Google-Smtp-Source: ADFU+vt8psX7oI+VrVWwuuuSd0DdixIB//11uZqMbjy6eeAnfbgNsSsJxDjtFUeLBn1Oes+dXkVnXyQCCzo= X-Received: by 2002:ad4:5421:: with SMTP id g1mr3771617qvt.57.1585152820530; Wed, 25 Mar 2020 09:13:40 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:25 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-15-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 14/38] crypto: kmsan: disable accelerated configs under KMSAN From: glider@google.com To: Herbert Xu , "David S. 
Miller" , Eric Biggers , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, dmitry.torokhov@gmail.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN is unable to understand when initialized values come from assembly. Disable accelerated configs in KMSAN builds to prevent false positive reports. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Herbert Xu Cc: "David S. Miller" Cc: Eric Biggers Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org Reviewed-by: Andrey Konovalov --- v4: - shorten comments as requested by Marco Elver v5: - move the 'depends' directives together, added missing configs as requested by Eric Biggers Change-Id: Iddc71a2a27360e036d719c0940ebf15553cf8de8 --- crypto/Kconfig | 30 ++++++++++++++++++++++++++++++ 1 file changed, 30 insertions(+) diff --git a/crypto/Kconfig b/crypto/Kconfig index c24a47406f8f5..5035e8b2b033f 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -268,6 +268,7 @@ config CRYPTO_CURVE25519 config CRYPTO_CURVE25519_X86 tristate "x86_64 accelerated Curve25519 scalar multiplication library" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_CURVE25519_GENERIC select CRYPTO_ARCH_HAVE_LIB_CURVE25519 @@ -317,11 +318,13 @@ config CRYPTO_AEGIS128_SIMD bool "Support SIMD acceleration for AEGIS-128" depends on CRYPTO_AEGIS128 && ((ARM || ARM64) && KERNEL_MODE_NEON) depends on !ARM || CC_IS_CLANG || GCC_VERSION >= 40800 + depends on !KMSAN # avoid false positives from assembly default y config CRYPTO_AEGIS128_AESNI_SSE2 tristate "AEGIS-128 AEAD algorithm (x86_64 AESNI+SSE2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_AEAD select CRYPTO_SIMD help @@ -458,6 +461,7 @@ config CRYPTO_NHPOLY1305 config CRYPTO_NHPOLY1305_SSE2 tristate "NHPoly1305 hash function (x86_64 SSE2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_NHPOLY1305 help SSE2 optimized implementation of the hash function used by the @@ -466,6 +470,7 @@ config CRYPTO_NHPOLY1305_SSE2 config CRYPTO_NHPOLY1305_AVX2 tristate "NHPoly1305 hash function (x86_64 AVX2 implementation)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_NHPOLY1305 help AVX2 optimized implementation of the hash function used by the @@ -579,6 +584,7 @@ config CRYPTO_CRC32C config CRYPTO_CRC32C_INTEL tristate "CRC32c INTEL hardware acceleration" depends on X86 + depends on !KMSAN # avoid false positives from assembly select 
CRYPTO_HASH help In Intel processor with SSE4.2 supported, the processor will @@ -619,6 +625,7 @@ config CRYPTO_CRC32 config CRYPTO_CRC32_PCLMUL tristate "CRC32 PCLMULQDQ hardware acceleration" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH select CRC32 help @@ -684,6 +691,7 @@ config CRYPTO_BLAKE2S config CRYPTO_BLAKE2S_X86 tristate "BLAKE2s digest algorithm (x86 accelerated version)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_BLAKE2S_GENERIC select CRYPTO_ARCH_HAVE_LIB_BLAKE2S @@ -698,6 +706,7 @@ config CRYPTO_CRCT10DIF config CRYPTO_CRCT10DIF_PCLMUL tristate "CRCT10DIF PCLMULQDQ hardware acceleration" depends on X86 && 64BIT && CRC_T10DIF + depends on !KMSAN # avoid false positives from assembly select CRYPTO_HASH help For x86_64 processors with SSE4.2 and PCLMULQDQ supported, @@ -745,6 +754,7 @@ config CRYPTO_POLY1305 config CRYPTO_POLY1305_X86_64 tristate "Poly1305 authenticator algorithm (x86_64/SSE2/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_LIB_POLY1305_GENERIC select CRYPTO_ARCH_HAVE_LIB_POLY1305 help @@ -870,6 +880,7 @@ config CRYPTO_SHA1 config CRYPTO_SHA1_SSSE3 tristate "SHA1 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA1 select CRYPTO_HASH help @@ -881,6 +892,7 @@ config CRYPTO_SHA1_SSSE3 config CRYPTO_SHA256_SSSE3 tristate "SHA256 digest algorithm (SSSE3/AVX/AVX2/SHA-NI)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA256 select CRYPTO_HASH help @@ -893,6 +905,7 @@ config CRYPTO_SHA256_SSSE3 config CRYPTO_SHA512_SSSE3 tristate "SHA512 digest algorithm (SSSE3/AVX/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SHA512 select CRYPTO_HASH help @@ -1064,6 +1077,7 @@ config CRYPTO_WP512 config CRYPTO_GHASH_CLMUL_NI_INTEL tristate "GHASH hash function (CLMUL-NI accelerated)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_CRYPTD help This is the x86_64 CLMUL-NI accelerated implementation of @@ -1114,6 +1128,7 @@ config CRYPTO_AES_TI config CRYPTO_AES_NI_INTEL tristate "AES cipher algorithms (AES-NI)" depends on X86 + depends on !KMSAN # avoid false positives from assembly select CRYPTO_AEAD select CRYPTO_LIB_AES select CRYPTO_ALGAPI @@ -1237,6 +1252,7 @@ config CRYPTO_BLOWFISH_COMMON config CRYPTO_BLOWFISH_X86_64 tristate "Blowfish cipher algorithm (x86_64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_BLOWFISH_COMMON help @@ -1268,6 +1284,7 @@ config CRYPTO_CAMELLIA_X86_64 tristate "Camellia cipher algorithm (x86_64)" depends on X86 && 64BIT depends on CRYPTO + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_GLUE_HELPER_X86 help @@ -1285,6 +1302,7 @@ config CRYPTO_CAMELLIA_AESNI_AVX_X86_64 tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX)" depends on X86 && 64BIT depends on CRYPTO + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAMELLIA_X86_64 select CRYPTO_GLUE_HELPER_X86 @@ -1305,6 +1323,7 @@ config CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 tristate "Camellia cipher algorithm (x86_64/AES-NI/AVX2)" depends on X86 && 64BIT depends on CRYPTO + depends on !KMSAN # avoid false positives from assembly select 
CRYPTO_CAMELLIA_AESNI_AVX_X86_64 help Camellia cipher algorithm module (x86_64/AES-NI/AVX2). @@ -1351,6 +1370,7 @@ config CRYPTO_CAST5 config CRYPTO_CAST5_AVX_X86_64 tristate "CAST5 (CAST-128) cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAST5 select CRYPTO_CAST_COMMON @@ -1373,6 +1393,7 @@ config CRYPTO_CAST6 config CRYPTO_CAST6_AVX_X86_64 tristate "CAST6 (CAST-256) cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_CAST6 select CRYPTO_CAST_COMMON @@ -1406,6 +1427,7 @@ config CRYPTO_DES_SPARC64 config CRYPTO_DES3_EDE_X86_64 tristate "Triple DES EDE cipher algorithm (x86-64)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_LIB_DES help @@ -1473,6 +1495,7 @@ config CRYPTO_CHACHA20 config CRYPTO_CHACHA20_X86_64 tristate "ChaCha stream cipher algorithms (x86_64/SSSE3/AVX2/AVX-512VL)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_LIB_CHACHA_GENERIC select CRYPTO_ARCH_HAVE_LIB_CHACHA @@ -1516,6 +1539,7 @@ config CRYPTO_SERPENT config CRYPTO_SERPENT_SSE2_X86_64 tristate "Serpent cipher algorithm (x86_64/SSE2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_GLUE_HELPER_X86 select CRYPTO_SERPENT @@ -1535,6 +1559,7 @@ config CRYPTO_SERPENT_SSE2_X86_64 config CRYPTO_SERPENT_SSE2_586 tristate "Serpent cipher algorithm (i586/SSE2)" depends on X86 && !64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_GLUE_HELPER_X86 select CRYPTO_SERPENT @@ -1554,6 +1579,7 @@ config CRYPTO_SERPENT_SSE2_586 config CRYPTO_SERPENT_AVX_X86_64 tristate "Serpent cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_GLUE_HELPER_X86 select CRYPTO_SERPENT @@ -1574,6 +1600,7 @@ config CRYPTO_SERPENT_AVX_X86_64 config CRYPTO_SERPENT_AVX2_X86_64 tristate "Serpent cipher algorithm (x86_64/AVX2)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SERPENT_AVX_X86_64 help Serpent cipher algorithm, by Anderson, Biham & Knudsen. 
@@ -1669,6 +1696,7 @@ config CRYPTO_TWOFISH_586 config CRYPTO_TWOFISH_X86_64 tristate "Twofish cipher algorithm (x86_64)" depends on (X86 || UML_X86) && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_ALGAPI select CRYPTO_TWOFISH_COMMON help @@ -1685,6 +1713,7 @@ config CRYPTO_TWOFISH_X86_64 config CRYPTO_TWOFISH_X86_64_3WAY tristate "Twofish cipher algorithm (x86_64, 3-way parallel)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_TWOFISH_COMMON select CRYPTO_TWOFISH_X86_64 @@ -1706,6 +1735,7 @@ config CRYPTO_TWOFISH_X86_64_3WAY config CRYPTO_TWOFISH_AVX_X86_64 tristate "Twofish cipher algorithm (x86_64/AVX)" depends on X86 && 64BIT + depends on !KMSAN # avoid false positives from assembly select CRYPTO_SKCIPHER select CRYPTO_GLUE_HELPER_X86 select CRYPTO_SIMD From patchwork Wed Mar 25 16:12:26 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458261 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6307F14B4 for ; Wed, 25 Mar 2020 16:13:55 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 3167A20772 for ; Wed, 25 Mar 2020 16:13:55 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="d7d8A0cu" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 3167A20772 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4BED96B0075; Wed, 25 Mar 2020 12:13:46 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 495A96B0078; Wed, 25 Mar 2020 12:13:46 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 385276B007B; Wed, 25 Mar 2020 12:13:46 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0109.hostedemail.com [216.40.44.109]) by kanga.kvack.org (Postfix) with ESMTP id 1E5E56B0075 for ; Wed, 25 Mar 2020 12:13:46 -0400 (EDT) Received: from smtpin15.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id EE4388248047 for ; Wed, 25 Mar 2020 16:13:45 +0000 (UTC) X-FDA: 76634380452.15.scene65_7825b046f554e X-Spam-Summary: 
2,0,0,3384fe7eda7e70f6,d41d8cd98f00b204,3n4n7xgykccyinkfgtiqqing.eqonkpwz-oomxcem.qti@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:901:960:966:967:973:988:989:1260:1263:1277:1313:1314:1345:1359:1431:1437:1516:1518:1534:1541:1593:1594:1711:1730:1747:1777:1792:2196:2199:2393:2525:2559:2564:2682:2685:2859:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3152:3352:3865:3866:3867:3871:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4385:5007:6261:6653:6742:6743:8599:9025:9388:9969:10004:10049:10400:11232:11658:11914:12043:12048:12296:12297:12438:12555:12895:13069:13311:13357:13846:14096:14097:14181:14394:14659:14721:21080:21365:21444:21451:21627:30054:30062:30064,0,RBL:209.85.221.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: scene65_7825b046f554e X-Filterd-Recvd-Size: 4944 Received: from mail-wr1-f73.google.com (mail-wr1-f73.google.com [209.85.221.73]) by imf21.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:13:45 +0000 (UTC) Received: by mail-wr1-f73.google.com with SMTP id h17so1362192wru.16 for ; Wed, 25 Mar 2020 09:13:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=a6lHJAJCHxoOmSQhCTPS1DgBjSD3nvhbZodQXjKaNuQ=; b=d7d8A0cuZhPtkeys6IyZaGOF5WSWCgoqzw2jHVrkYw0ScvZG5MWiF9LfUVPHNePfT4 QgSj8/J4IfwhqGGKiXdorScuzV+hxaMsWtS9vLBM8Ifh7SyiXqTqp32RA+OH+LcKE2lx ug0L3TPWDd/UT59oG65cLmo5yQjht3JdRW+LtrAIrGqXT0iZpIsrRMVGw16EN9UbCohM sFq3ldoXH+hKhzvTkXS8lvO+xW6k6teFPMlr3/K+0czh9jZDU8w7EA32E9SVk/knPjLK +pyF6M0vqU/L/CfW9S5okNhrd6u3JKSyN3NnmbZaCGUqUJU8NdXhySlx9Y6XErjTv+h2 agsg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=a6lHJAJCHxoOmSQhCTPS1DgBjSD3nvhbZodQXjKaNuQ=; b=gy20+WpaEAPfveCNrPTox4uVtAIR14HlblmFb9wZZwQDYgn9YesxZXtX6CCNNzp49J KTR4d5uiLS09M6bY9EJON8UDVcCvtJkCyq50P6e3ryjhqtIGBcmOoDgzuld7GXaaCQ0x hEVn/CnuNNpOsbXMl4ZjA5igXb4LvCJREuiJtsRN5/PlrV0ug9OTHsqAOh0PAjAqbAi2 1zjbS/YQVHlSBTRum0vILM9OkLVQNrcDfci930dPiuX4Fqv71cElOZOHKeJjDHoUPWEC pkva8pBfODaWqNzmIh4S4Wx3P4LDYTI92pWt33+ttPjxFwmqg4tHuYQhc/Ziu2dcNEef j6UQ== X-Gm-Message-State: ANhLgQ368yNZgMz7e6s5/UWQajrDraaV7VtSwBbXViB11ZhCpeg31zTk ZigiQrGNfsDZJ/eT3rUWsXfHOAXOUcQ= X-Google-Smtp-Source: ADFU+vsGii7ajATDXcKphJdUT0kfQ8qwY5S5O2byjMEreXm3Lv1x9FPJZSyz7euci0lrCHEri3L8N/i5GVA= X-Received: by 2002:a05:6000:114f:: with SMTP id d15mr4368752wrx.143.1585152823758; Wed, 25 Mar 2020 09:13:43 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:26 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-16-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 15/38] kmsan: x86: disable UNWINDER_ORC under KMSAN From: glider@google.com To: Qian Cai , Christoph Hellwig , Herbert Xu , Harry Wentland , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, darrick.wong@oracle.com, davem@davemloft.net, 
dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN doesn't currently support UNWINDER_ORC, causing the kernel to freeze at boot time. See http://github.com/google/kmsan/issues/48. Signed-off-by: Alexander Potapenko Cc: Qian Cai Cc: Christoph Hellwig Cc: Herbert Xu Cc: Harry Wentland Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- This patch is part of "kmsan: Kconfig changes to disable options incompatible with KMSAN", which was split into smaller pieces. Change-Id: I9cb6ebbaeb9a38e9e1d015c68ab77d40420a7ad0 --- arch/x86/Kconfig.debug | 3 +++ 1 file changed, 3 insertions(+) diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug index 2e74690b028a5..ad71eb2a416ec 100644 --- a/arch/x86/Kconfig.debug +++ b/arch/x86/Kconfig.debug @@ -276,6 +276,9 @@ choice config UNWINDER_ORC bool "ORC unwinder" depends on X86_64 + # KMSAN doesn't support UNWINDER_ORC yet, + # see https://github.com/google/kmsan/issues/48. + depends on !KMSAN select STACK_VALIDATION ---help--- This option enables the ORC (Oops Rewind Capability) unwinder for From patchwork Wed Mar 25 16:12:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458263 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 9FF0A913 for ; Wed, 25 Mar 2020 16:13:58 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6095120774 for ; Wed, 25 Mar 2020 16:13:58 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Eoavv/bO" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6095120774 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 497466B007B; Wed, 25 Mar 2020 12:13:49 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 46DBE6B007D; Wed, 25 Mar 2020 12:13:49 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 3AC686B007E; Wed, 25 Mar 2020 12:13:49 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0112.hostedemail.com [216.40.44.112]) by kanga.kvack.org (Postfix) with ESMTP id 202936B007B for ; Wed, 25 Mar 2020 12:13:49 -0400 (EDT) Received: from smtpin03.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with 
ESMTP id EC5642BFC1 for ; Wed, 25 Mar 2020 16:13:48 +0000 (UTC) X-FDA: 76634380536.03.fall62_78935664e6f3b X-Spam-Summary: 2,0,0,2230771300a11d4f,d41d8cd98f00b204,3oon7xgykccklqnijwlttlqj.htrqnszc-rrpafhp.twl@flex--glider.bounces.google.com,,RULES_HIT:41:152:334:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1544:1593:1594:1711:1730:1747:1777:1792:2194:2197:2199:2200:2393:2559:2562:2904:3138:3139:3140:3141:3142:3152:3355:3653:3865:3867:3868:3870:3871:3872:3874:4119:4250:4605:5007:6119:6261:6653:6742:6743:7875:7974:8660:8784:9969:10004:11026:11473:11658:11914:12043:12048:12296:12297:12438:12555:12895:13148:13230:13846:14096:14097:14181:14394:14659:14721:21080:21221:21365:21433:21444:21451:21627:21740:21795:21990:30051:30054:30064:30069,0,RBL:209.85.221.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:2,LUA_SUMMARY:none X-HE-Tag: fall62_78935664e6f3b X-Filterd-Recvd-Size: 8043 Received: from mail-wr1-f74.google.com (mail-wr1-f74.google.com [209.85.221.74]) by imf05.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:13:48 +0000 (UTC) Received: by mail-wr1-f74.google.com with SMTP id f8so1378652wrp.1 for ; Wed, 25 Mar 2020 09:13:48 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=1/Qp93MRgGz2zAATzzImSzxxf/Q7LJs0a04QBt6vCuA=; b=Eoavv/bOMMPQ7/F30RbMbnA358EezMcW1w3DjDNCuPE+YiGYk2V3km0+nvUeQFpXU4 nKul8ASmwCshOHdhEhFeDBFvVIbpKAAzG/+qZkJzeW7usvF/LJeEQCb1EbWWI+JeZiWU lq5/6RJrp08FITrwGFvgk0YIoQyNJPsMD65zQdSg/lso8/mJZmACNjDtbgVNTPwQAsiz 0kHDP1Rt22sNYqhMEvxoh2DRlz+ijV0ih97ykN2WptDXTYsHeTnaGoUZTfa/sRvlijcF +w8O9OG1WO4Owp3fJyY4tuoYXBc6IVzQyKgDJtTjQW8oc3wdh7ciYB0EwGx1vZmWUHwU gFkw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=1/Qp93MRgGz2zAATzzImSzxxf/Q7LJs0a04QBt6vCuA=; b=AbBJ1hFP/5bwUsZ2srlK7H62mFgjXb7QE1G/t6xWQ2dN/zIe6mVdU8ndI3VrhUF9jp ptwi+5JG4FCHs6kQxg1gcRmz/IWtC6Vw2t2c4AmyXA/cD6xJiTqlkSn7HCKX8cBRsraB IyGEzoKFZLZeXgF8F+yrdTfBl2Kflhj18xkIUlwCORd2D/GodE5WVX6hKySVblfRWoMQ bTLZEbhTrAsvJI0xE9D+F2eaTZyXxEZLfoWhtdxZs79SP1t7MroFkNnkjrryXhvAi6AD huAQmgVY2YXmWbyqGZdmmY3W+NjAPWR7y+u6LjruocdbOzKYNSKLAyUozNP0IVcI3TGz rUJw== X-Gm-Message-State: ANhLgQ0Zg0t2JlTaj4hvrXSLwww3O1GcshoLb+u5d/D9UJ9IQ3/gA8yE nOLQGj1TXS2AKGnZKVvWQjNIh12MAhQ= X-Google-Smtp-Source: ADFU+vuTgQHXpTp20DET2xBebeWvzPSM6yB5Roam+3SechTtO6BrGbXsbeY7g6xqqFCF6DzGWkeJuA0oBx0= X-Received: by 2002:adf:a21a:: with SMTP id p26mr4218863wra.102.1585152826967; Wed, 25 Mar 2020 09:13:46 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:27 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-17-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 16/38] kmsan: x86/asm: softirq: add KMSAN IRQ entry hooks From: glider@google.com To: Jens Axboe , Andy Lutomirski , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , Christoph Hellwig , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, 
darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Add assembly helpers to entry_64.S that invoke hooks from kmsan_entry.c and notify KMSAN about interrupts. Also call these hooks from kernel/softirq.c This is needed to switch between several KMSAN contexts holding function parameter metadata. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Jens Axboe Cc: Andy Lutomirski Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: Christoph Hellwig Cc: linux-mm@kvack.org Acked-by: Andrey Konovalov --- v4: - moved softirq changes to this patch Change-Id: I3037d51672fe69d09e588b27adb2d9fdc6ad3a7d --- arch/x86/entry/entry_64.S | 16 ++++++++++++++++ kernel/softirq.c | 5 +++++ 2 files changed, 21 insertions(+) diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S index 0e9504fabe526..03f5a32b0af4d 100644 --- a/arch/x86/entry/entry_64.S +++ b/arch/x86/entry/entry_64.S @@ -35,6 +35,7 @@ #include #include #include +#include #include #include #include @@ -575,6 +576,7 @@ SYM_CODE_START(interrupt_entry) 1: ENTER_IRQ_STACK old_rsp=%rdi save_ret=1 + KMSAN_INTERRUPT_ENTER /* We entered an interrupt context - irqs are off: */ TRACE_IRQS_OFF @@ -604,12 +606,14 @@ SYM_CODE_START_LOCAL(common_interrupt) addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */ call interrupt_entry UNWIND_HINT_REGS indirect=1 + KMSAN_UNPOISON_PT_REGS call do_IRQ /* rdi points to pt_regs */ /* 0(%rsp): old RSP */ ret_from_intr: DISABLE_INTERRUPTS(CLBR_ANY) TRACE_IRQS_OFF + KMSAN_INTERRUPT_EXIT LEAVE_IRQ_STACK testb $3, CS(%rsp) @@ -801,6 +805,7 @@ SYM_CODE_START(\sym) .Lcommon_\sym: call interrupt_entry UNWIND_HINT_REGS indirect=1 + KMSAN_UNPOISON_PT_REGS call \do_sym /* rdi points to pt_regs */ jmp ret_from_intr SYM_CODE_END(\sym) @@ -908,15 +913,18 @@ apicinterrupt IRQ_WORK_VECTOR irq_work_interrupt smp_irq_work_interrupt .if \shift_ist != -1 subq $\ist_offset, CPU_TSS_IST(\shift_ist) + KMSAN_IST_ENTER(\shift_ist) .endif .if \read_cr2 movq %r12, %rdx /* Move CR2 into 3rd argument */ .endif + KMSAN_UNPOISON_PT_REGS call \do_sym .if \shift_ist != -1 + KMSAN_IST_EXIT(\shift_ist) addq $\ist_offset, CPU_TSS_IST(\shift_ist) .endif @@ -1079,7 +1087,9 @@ SYM_FUNC_START(do_softirq_own_stack) pushq %rbp mov %rsp, %rbp ENTER_IRQ_STACK regs=0 old_rsp=%r11 + KMSAN_SOFTIRQ_ENTER call __do_softirq + KMSAN_SOFTIRQ_EXIT LEAVE_IRQ_STACK regs=0 leaveq ret @@ -1466,9 +1476,12 @@ SYM_CODE_START(nmi) * done with the NMI stack. */ + KMSAN_NMI_ENTER movq %rsp, %rdi movq $-1, %rsi + KMSAN_UNPOISON_PT_REGS call do_nmi + KMSAN_NMI_EXIT /* * Return back to user mode. 
We must *not* do the normal exit @@ -1678,10 +1691,13 @@ end_repeat_nmi: call paranoid_entry UNWIND_HINT_REGS + KMSAN_NMI_ENTER /* paranoidentry do_nmi, 0; without TRACE_IRQS_OFF */ movq %rsp, %rdi movq $-1, %rsi + KMSAN_UNPOISON_PT_REGS call do_nmi + KMSAN_NMI_EXIT /* Always restore stashed CR3 value (see paranoid_entry) */ RESTORE_CR3 scratch_reg=%r15 save_reg=%r14 diff --git a/kernel/softirq.c b/kernel/softirq.c index 0427a86743a46..98c5f4062cbfe 100644 --- a/kernel/softirq.c +++ b/kernel/softirq.c @@ -11,6 +11,7 @@ #include #include +#include #include #include #include @@ -370,7 +371,9 @@ static inline void invoke_softirq(void) * it is the irq stack, because it should be near empty * at this stage. */ + kmsan_context_enter(); __do_softirq(); + kmsan_context_exit(); #else /* * Otherwise, irq_exit() is called on the task stack that can @@ -600,7 +603,9 @@ static void run_ksoftirqd(unsigned int cpu) * We can safely run softirq on inline stack, as we are not deep * in the task stack here. */ + kmsan_context_enter(); __do_softirq(); + kmsan_context_exit(); local_irq_enable(); cond_resched(); return; From patchwork Wed Mar 25 16:12:28 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458265 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 8348C913 for ; Wed, 25 Mar 2020 16:14:02 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 362A820774 for ; Wed, 25 Mar 2020 16:14:02 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="oOYzSI49" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 362A820774 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 5D0056B007E; Wed, 25 Mar 2020 12:13:52 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 581EE6B0080; Wed, 25 Mar 2020 12:13:52 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 470F56B0081; Wed, 25 Mar 2020 12:13:52 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0188.hostedemail.com [216.40.44.188]) by kanga.kvack.org (Postfix) with ESMTP id 304B66B007E for ; Wed, 25 Mar 2020 12:13:52 -0400 (EDT) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 0359E18206365 for ; Wed, 25 Mar 2020 16:13:52 +0000 (UTC) X-FDA: 76634380662.16.cover37_78ff2bbc91012 X-Spam-Summary: 
2,0,0,44a4d01cb4262a15,d41d8cd98f00b204,3pyn7xgykccwotqlmzowwotm.kwutqvcf-uusdiks.wzo@flex--glider.bounces.google.com,,RULES_HIT:1:2:41:152:355:379:541:800:960:968:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1593:1594:1605:1730:1747:1777:1792:1801:2194:2199:2393:2553:2559:2562:2693:2902:3138:3139:3140:3141:3142:3152:3865:3866:3867:3868:3870:3871:4050:4250:4321:4605:5007:6261:6609:6653:6742:6743:7875:7903:8660:8784:9969:10004:11026:11473:11657:11658:11914:12043:12048:12296:12297:12438:12555:12895:12986:13148:13230:13846:14394:14659:21080:21365:21444:21451:21627:21740:21939:30012:30054:30064:30090,0,RBL:209.85.221.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:17,LUA_SUMMARY:none X-HE-Tag: cover37_78ff2bbc91012 X-Filterd-Recvd-Size: 10653 Received: from mail-wr1-f74.google.com (mail-wr1-f74.google.com [209.85.221.74]) by imf01.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:13:51 +0000 (UTC) Received: by mail-wr1-f74.google.com with SMTP id y1so254334wrp.5 for ; Wed, 25 Mar 2020 09:13:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=OCAWH2XOJkierb6veGtRwTOLvSBLg+bwKorlGaZhIwQ=; b=oOYzSI49k8lQkFxL+r+mSHovhH/iJECoUjnBoDzhPkTioM3MoBwsO7L18Pc34o3JJ6 LIYfDRy91Bjs9eYFzvjv21HomhkWf6fQzDnSz63wYmNWW2+3PTHi9Abg3Pk/ekx6IFJw kVKBh9wxjMOHKU8K3lpHf0mepzx5ImXNuWYyOAM+4aFQP7p9Gk28+6WEF0a9VNQeHX5a ngtBvSbnH27gwBHDsXsX3Qfr3I6y8djhVTnm1LyuaptC9obWZbWbLqu6bgQaqjHRG3U6 f0sNrlIw6QdGnSJzoIeBIzoseJH8NFvaNduyBaftyfJhLILDNEJ1Kx+Nb7DI7HTjdMlk 7tqg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=OCAWH2XOJkierb6veGtRwTOLvSBLg+bwKorlGaZhIwQ=; b=KIokMahSaBUoy9ZTKaADu9zRf/4G06v4wvSMzgZogggMHogYWWE2Al63EJz6sjfQPe gp44SmSDJBB1iFWdMYgtU4BHMJOTgW0qOhHdmp9O+0j3LQex6lrUaqTXMfZCkUMJi6+J wbGvUEkAg5xgB/dnMNsmkeY70dq73mWykmY7fp3Gprrn0S83cm/ime5T5XkLkFqIL763 AVnJRa34sURPN7yiLmx55ag91GLqVV8iDqOr8fsrNsHybACgg2hvG1xl7unIUl0Acpmy qX3sm7Fmj/zqBsMokuAe66NHb1be5WDMptIWvaQVs5Ev/NErW6L2ENK/R6wAh3Szxy2c KYsg== X-Gm-Message-State: ANhLgQ0gKjj8gdUveBYcDm2HVYREBFeg5u1cQbkJTwd+22pmUqReiwcW JnvUlCGX/QbL4vDUxUfZeu/1ZE7PKr8= X-Google-Smtp-Source: ADFU+vtZ7EMawfXf0vWxJhHQTGJGRmaG7YUOSJMCdN7YU95ZC+41f46g1vZSyUR08mKoOov44EbmXvYv3T0= X-Received: by 2002:adf:b3d4:: with SMTP id x20mr3906530wrd.269.1585152829955; Wed, 25 Mar 2020 09:13:49 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:28 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-18-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 17/38] kmsan: disable KMSAN instrumentation for certain kernel parts From: glider@google.com To: Ard Biesheuvel , Thomas Gleixner , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, 
harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Instrumenting some files with KMSAN will result in kernel being unable to link, boot or crashing at runtime for various reasons (e.g. infinite recursion caused by instrumentation hooks calling instrumented code again). Disable KMSAN in the following places: - arch/x86/boot and arch/x86/realmode/rm, as KMSAN doesn't work for i386; - arch/x86/entry/vdso, which isn't linked with KMSAN runtime; - three files in arch/x86/kernel - boot problems; - arch/x86/mm/cpu_entry_area.c - recursion; - EFI stub - build failures; - kcov, stackdepot, lockdep - recursion. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Ard Biesheuvel Cc: Thomas Gleixner Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org Reviewed-by: Andrey Konovalov --- v4: - fix lockdep support by not instrumenting lockdep.c - unified comments with KCSAN Change-Id: I90961eabf2dcb9ae992aed259088953bad5e4d6d --- arch/x86/boot/Makefile | 1 + arch/x86/boot/compressed/Makefile | 2 ++ arch/x86/entry/vdso/Makefile | 3 +++ arch/x86/kernel/Makefile | 4 ++++ arch/x86/kernel/cpu/Makefile | 1 + arch/x86/mm/Makefile | 3 +++ arch/x86/realmode/rm/Makefile | 1 + drivers/firmware/efi/libstub/Makefile | 1 + kernel/Makefile | 1 + kernel/locking/Makefile | 4 ++++ lib/Makefile | 1 + 11 files changed, 22 insertions(+) diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile index d7aa1c3a6b25a..2ca8b9b478f3a 100644 --- a/arch/x86/boot/Makefile +++ b/arch/x86/boot/Makefile @@ -12,6 +12,7 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Kernel does not boot with kcov instrumentation here. diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile index 7619742f91c9a..2af62067a90ec 100644 --- a/arch/x86/boot/compressed/Makefile +++ b/arch/x86/boot/compressed/Makefile @@ -20,6 +20,8 @@ # Sanitizer runtimes are unavailable and cannot be linked for early boot code. KASAN_SANITIZE := n KCSAN_SANITIZE := n +# KMSAN doesn't work for i386 +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile index ecf6128c95516..e2b1b9be89ab7 100644 --- a/arch/x86/entry/vdso/Makefile +++ b/arch/x86/entry/vdso/Makefile @@ -13,6 +13,9 @@ KBUILD_CFLAGS += $(DISABLE_LTO) # Sanitizer runtimes are unavailable and cannot be linked here. 
KASAN_SANITIZE := n +KMSAN_SANITIZE_vclock_gettime.o := n +KMSAN_SANITIZE_vgetcpu.o := n + UBSAN_SANITIZE := n KCSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile index 1ee83df407e3b..a3b7b0452c817 100644 --- a/arch/x86/kernel/Makefile +++ b/arch/x86/kernel/Makefile @@ -32,6 +32,10 @@ KASAN_SANITIZE_paravirt.o := n # by several compilation units. To be safe, disable all instrumentation. KCSAN_SANITIZE := n +# Work around reboot loop. +KMSAN_SANITIZE_head$(BITS).o := n +KMSAN_SANITIZE_nmi.o := n + OBJECT_FILES_NON_STANDARD_relocate_kernel_$(BITS).o := y OBJECT_FILES_NON_STANDARD_test_nx.o := y OBJECT_FILES_NON_STANDARD_paravirt_patch.o := y diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile index dba6a83bc3493..0e299ba013868 100644 --- a/arch/x86/kernel/cpu/Makefile +++ b/arch/x86/kernel/cpu/Makefile @@ -12,6 +12,7 @@ endif # If these files are instrumented, boot hangs during the first second. KCOV_INSTRUMENT_common.o := n KCOV_INSTRUMENT_perf_event.o := n +KMSAN_SANITIZE_common.o := n # As above, instrumenting secondary CPU boot code causes boot hangs. KCSAN_SANITIZE_common.o := n diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile index f7fd0e868c9c8..f11848633cf5b 100644 --- a/arch/x86/mm/Makefile +++ b/arch/x86/mm/Makefile @@ -11,6 +11,9 @@ KASAN_SANITIZE_mem_encrypt_identity.o := n # reference __initdata sections. KCSAN_SANITIZE := n +# Avoid recursion by not calling KMSAN hooks for CEA code. +KMSAN_SANITIZE_cpu_entry_area.o := n + ifdef CONFIG_FUNCTION_TRACER CFLAGS_REMOVE_mem_encrypt.o = -pg CFLAGS_REMOVE_mem_encrypt_identity.o = -pg diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile index 83f1b6a56449f..f614009d3e4e2 100644 --- a/arch/x86/realmode/rm/Makefile +++ b/arch/x86/realmode/rm/Makefile @@ -10,6 +10,7 @@ # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y # Prevents link failures: __sanitizer_cov_trace_pc() is not linked in. diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile index dd31237fba2e9..2cf047a0d2e06 100644 --- a/drivers/firmware/efi/libstub/Makefile +++ b/drivers/firmware/efi/libstub/Makefile @@ -36,6 +36,7 @@ GCOV_PROFILE := n # Sanitizer runtimes are unavailable and cannot be linked here. KASAN_SANITIZE := n KCSAN_SANITIZE := n +KMSAN_SANITIZE := n UBSAN_SANITIZE := n OBJECT_FILES_NON_STANDARD := y diff --git a/kernel/Makefile b/kernel/Makefile index 6ac453daf500e..e9093daf41056 100644 --- a/kernel/Makefile +++ b/kernel/Makefile @@ -35,6 +35,7 @@ KCOV_INSTRUMENT_stacktrace.o := n KCOV_INSTRUMENT_kcov.o := n KASAN_SANITIZE_kcov.o := n KCSAN_SANITIZE_kcov.o := n +KMSAN_SANITIZE_kcov.o := n CFLAGS_kcov.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) # cond_syscall is currently not LTO compatible diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile index 6d11cfb9b41f2..1dd1f7d81e691 100644 --- a/kernel/locking/Makefile +++ b/kernel/locking/Makefile @@ -3,6 +3,10 @@ # and is generally not a function of system call inputs. KCOV_INSTRUMENT := n +# Instrumenting lockdep.c with KMSAN may cause deadlocks because of +# recursive KMSAN runtime calls. +KMSAN_SANITIZE_lockdep.o := n + obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o # Avoid recursion lockdep -> KCSAN -> ... -> lockdep. 
diff --git a/lib/Makefile b/lib/Makefile index d8058c5c05826..6ec959b62a55f 100644 --- a/lib/Makefile +++ b/lib/Makefile @@ -234,6 +234,7 @@ obj-$(CONFIG_IRQ_POLL) += irq_poll.o CFLAGS_stackdepot.o += -fno-builtin obj-$(CONFIG_STACKDEPOT) += stackdepot.o KASAN_SANITIZE_stackdepot.o := n +KMSAN_SANITIZE_stackdepot.o := n KCOV_INSTRUMENT_stackdepot.o := n libfdt_files = fdt.o fdt_ro.o fdt_wip.o fdt_rw.o fdt_sw.o fdt_strerror.o \ From patchwork Wed Mar 25 16:12:29 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458267 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 30F66913 for ; Wed, 25 Mar 2020 16:14:06 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id E931E20774 for ; Wed, 25 Mar 2020 16:14:05 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="u+yPsS+/" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org E931E20774 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 385886B0082; Wed, 25 Mar 2020 12:13:55 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 338846B0083; Wed, 25 Mar 2020 12:13:55 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 224E66B0085; Wed, 25 Mar 2020 12:13:55 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0020.hostedemail.com [216.40.44.20]) by kanga.kvack.org (Postfix) with ESMTP id 0C53C6B0082 for ; Wed, 25 Mar 2020 12:13:55 -0400 (EDT) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id D708C7581B for ; Wed, 25 Mar 2020 16:13:54 +0000 (UTC) X-FDA: 76634380788.01.blade37_796d0d21d4725 X-Spam-Summary: 2,0,0,e8a1bac1556dd875,d41d8cd98f00b204,3qin7xgykcc8rwtopcrzzrwp.nzxwtyfi-xxvglnv.zcr@flex--glider.bounces.google.com,,RULES_HIT:2:41:152:355:379:541:800:960:966:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1593:1594:1605:1606:1730:1747:1777:1792:2194:2196:2198:2199:2200:2201:2393:2553:2559:2562:2693:2731:3138:3139:3140:3141:3142:3152:3865:3866:3867:3868:3870:3871:3872:3874:4119:4250:4321:4385:5007:6119:6261:6653:6742:6743:7875:7903:7904:8603:8660:9592:9969:10004:11026:11473:11657:11658:11914:12043:12048:12291:12296:12297:12438:12555:12895:12986:13148:13230:13846:14096:14097:14394:14659:21080:21212:21365:21444:21451:21627:21740:21966:21990:30012:30029:30054:30055:30064:30070:30090,0,RBL:209.85.221.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: blade37_796d0d21d4725 X-Filterd-Recvd-Size: 8586 Received: from mail-wr1-f73.google.com (mail-wr1-f73.google.com [209.85.221.73]) by imf48.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:13:54 +0000 (UTC) Received: by 
mail-wr1-f73.google.com with SMTP id h14so1362602wrr.12 for ; Wed, 25 Mar 2020 09:13:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=Sa6nsq0BYXIlysvXMPCpNA3EIKSRawmp2xcpE7WG474=; b=u+yPsS+/uRngXo1UzqnXXoVjjJl/OCO7XeMd7L66pM0UT8/SDMqUSQXyemRPPhIMyf 51spg05rsggW3fUjL+x8m3uQhsFSODUHL5/86Y6VJNRnLv6jOuDIDgM3hBOavx98CcQO Xc5DvC0/TUsL0xHHqA3ckwuD1sn+0t3+2KAQLEEfysJauHmUJK341WVzz/n+09bsT+Xb 5bpQkyG4OyYDilbmddgitHhkp3/lfNErq1cDgeS5fIu+u+OQVMOMmQmwQ7bU5ThYJqCV 0Jr5ozz++HNE1YBNolTG/Nn3yGFMrdyihXqF69qWTZimMW71vV4JpgeIslzrtV3qodAP rJNQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=Sa6nsq0BYXIlysvXMPCpNA3EIKSRawmp2xcpE7WG474=; b=mFm0Rpa4pTzEIyGsuAPRx1nc0iRNNC4AtB0wAT5Qh7cezFPsb0LX+y7iu7QBbT5cRr k4N2wp5NbSF9+WH2OmJA4SBZ0nrU3g2dDDafTcwrg3BY7cbEl5cuofkv3v+0NBBawpAL dmy+tJYz9+s4qCd7DjOWHL3zI9aR213qc8CkX5l1X1p5rTj0imE/ijmX2aIcBOOOj8FE B87kLSWfy+GSCajZ/yRkROww21FHzrIp9nyYOImPfAref3XAybm5EOY8f0xS0xjPD9Gu 4Ou7qAQQV2Kq959pPqXqNembx0Y+xvo0HgMhnKhv9OM7OZEMzvDYTHU2p76iaqHZbuKH LWQQ== X-Gm-Message-State: ANhLgQ32TmptrCpfSu3KOqFXSxZTCCBCPyZkuemkd4CSbE8VSSkpLxJI 7eKIIFyffK4c/WKu2XM/XfRpyrzLdFM= X-Google-Smtp-Source: ADFU+vvAGZoF26dIztUr51cgdgQNZCrtakfrlaH+b0C27JkYlF9DYm/ikdaAvyvLczMLIRntLmkeHuicx/w= X-Received: by 2002:adf:e98b:: with SMTP id h11mr4243604wrm.409.1585152832911; Wed, 25 Mar 2020 09:13:52 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:29 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-19-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 18/38] kmsan: mm: call KMSAN hooks from SLUB code From: glider@google.com To: Andrew Morton , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In order to report uninitialized memory coming from heap allocations KMSAN has to poison them unless they're created with __GFP_ZERO. It's handy that we need KMSAN hooks in the places where init_on_alloc/init_on_free initialization is performed. 
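[Illustration, not part of the patch: the paragraph above says a heap object only counts as initialized when the allocator zeroes it (__GFP_ZERO / init_on_alloc), and is poisoned otherwise. The standalone userspace sketch below models just that rule. The shadow[] array, the GFP_ZERO constant and the *_model() helpers are hypothetical stand-ins, not the kernel's KMSAN API.]

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define OBJ_SIZE 32
#define GFP_ZERO 0x1u                    /* stand-in for __GFP_ZERO */

static unsigned char shadow[OBJ_SIZE];   /* 1 = byte is uninitialized */

static void kmsan_poison_model(size_t size)   { memset(shadow, 1, size); }
static void kmsan_unpoison_model(size_t size) { memset(shadow, 0, size); }

/* Models the allocation path: zero-init clears shadow, otherwise poison. */
static void *slab_alloc_model(size_t size, unsigned int flags)
{
	void *obj = malloc(size);

	if (!obj)
		return NULL;
	if (flags & GFP_ZERO) {
		memset(obj, 0, size);        /* __GFP_ZERO / init_on_alloc */
		kmsan_unpoison_model(size);  /* contents are now defined   */
	} else {
		kmsan_poison_model(size);    /* reading it now is a report */
	}
	return obj;
}

int main(void)
{
	void *a = slab_alloc_model(OBJ_SIZE, 0);
	printf("plain alloc : shadow[0] = %d (1 = poisoned)\n", shadow[0]);
	free(a);

	void *b = slab_alloc_model(OBJ_SIZE, GFP_ZERO);
	printf("zeroed alloc: shadow[0] = %d (0 = initialized)\n", shadow[0]);
	free(b);
	return 0;
}

[In the patch itself this pairing is done by calling kmsan_slab_alloc()/kmsan_slab_free() right next to the slab_want_init_on_alloc()/slab_want_init_on_free() checks in mm/slub.c, as the diff below shows.]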
Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Andrew Morton Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- v3: - reverted unrelated whitespace changes Change-Id: I51103b7981d3aabed747d0c85cbdc85568665871 --- mm/slub.c | 29 ++++++++++++++++++++++++----- 1 file changed, 24 insertions(+), 5 deletions(-) diff --git a/mm/slub.c b/mm/slub.c index 332d4b459a907..67c7f76bee412 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -21,6 +21,8 @@ #include #include #include +#include +#include /* KMSAN_INIT_VALUE */ #include #include #include @@ -283,17 +285,27 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object) prefetch(object + s->offset); } +/* + * When running under KMSAN, get_freepointer_safe() may return an uninitialized + * pointer value in the case the current thread loses the race for the next + * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in + * slab_alloc_node() will fail, so the uninitialized value won't be used, but + * KMSAN will still check all arguments of cmpxchg because of imperfect + * handling of inline assembly. + * To work around this problem, use KMSAN_INIT_VALUE() to force initialize the + * return value of get_freepointer_safe(). + */ static inline void *get_freepointer_safe(struct kmem_cache *s, void *object) { unsigned long freepointer_addr; void *p; if (!debug_pagealloc_enabled_static()) - return get_freepointer(s, object); + return KMSAN_INIT_VALUE(get_freepointer(s, object)); freepointer_addr = (unsigned long)object + s->offset; probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p)); - return freelist_ptr(s, p, freepointer_addr); + return KMSAN_INIT_VALUE(freelist_ptr(s, p, freepointer_addr)); } static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp) @@ -1411,6 +1423,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags) ptr = kasan_kmalloc_large(ptr, size, flags); /* As ptr might get tagged, call kmemleak hook after KASAN. 
*/ kmemleak_alloc(ptr, size, 1, flags); + kmsan_kmalloc_large(ptr, size, flags); return ptr; } @@ -1418,6 +1431,7 @@ static __always_inline void kfree_hook(void *x) { kmemleak_free(x); kasan_kfree_large(x, _RET_IP_); + kmsan_kfree_large(x); } static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x) @@ -1461,6 +1475,7 @@ static inline bool slab_free_freelist_hook(struct kmem_cache *s, do { object = next; next = get_freepointer(s, object); + kmsan_slab_free(s, object); if (slab_want_init_on_free(s)) { /* @@ -2784,6 +2799,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s, if (unlikely(slab_want_init_on_alloc(gfpflags, s)) && object) memset(object, 0, s->object_size); + kmsan_slab_alloc(s, object, gfpflags); slab_post_alloc_hook(s, gfpflags, 1, &object); return object; @@ -3167,7 +3183,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, void **p) { struct kmem_cache_cpu *c; - int i; + int i, j; /* memcg and kmem_cache debug support */ s = slab_pre_alloc_hook(s, flags); @@ -3217,11 +3233,11 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size, /* Clear memory outside IRQ disabled fastpath loop */ if (unlikely(slab_want_init_on_alloc(flags, s))) { - int j; - for (j = 0; j < i; j++) memset(p[j], 0, s->object_size); } + for (j = 0; j < i; j++) + kmsan_slab_alloc(s, p[j], flags); /* memcg and kmem_cache debug support */ slab_post_alloc_hook(s, flags, size, p); @@ -3829,6 +3845,7 @@ static int __init setup_slub_min_objects(char *str) __setup("slub_min_objects=", setup_slub_min_objects); +__no_sanitize_memory void *__kmalloc(size_t size, gfp_t flags) { struct kmem_cache *s; @@ -5725,6 +5742,7 @@ static char *create_unique_id(struct kmem_cache *s) p += sprintf(p, "%07u", s->size); BUG_ON(p > name + ID_STR_LENGTH - 1); + kmsan_unpoison_shadow(name, p - name); return name; } @@ -5874,6 +5892,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name) al->name = name; al->next = alias_list; alias_list = al; + kmsan_unpoison_shadow(al, sizeof(struct saved_alias)); return 0; } From patchwork Wed Mar 25 16:12:30 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458269 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id E034014B4 for ; Wed, 25 Mar 2020 16:14:09 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9320920774 for ; Wed, 25 Mar 2020 16:14:09 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Vj4rZEJ2" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9320920774 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 756A16B0087; Wed, 25 Mar 2020 12:13:58 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 72E946B0088; Wed, 25 Mar 2020 12:13:58 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5CE3E6B0089; Wed, 25 Mar 2020 12:13:58 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: 
linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0199.hostedemail.com [216.40.44.199]) by kanga.kvack.org (Postfix) with ESMTP id 435986B0087 for ; Wed, 25 Mar 2020 12:13:58 -0400 (EDT) Received: from smtpin19.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 246FA805C0 for ; Wed, 25 Mar 2020 16:13:58 +0000 (UTC) X-FDA: 76634380956.19.skin39_79e7fef097b60 X-Spam-Summary: 2,0,0,f9abaea1df3f2caa,d41d8cd98f00b204,3q4n7xgykcdiuzwrsfuccuzs.qcazwbil-aayjoqy.cfu@flex--glider.bounces.google.com,,RULES_HIT:1:41:152:355:379:541:800:960:966:968:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1593:1594:1605:1730:1747:1777:1792:1801:2196:2198:2199:2200:2393:2538:2553:2559:2562:2637:2693:2731:2892:2895:2896:3138:3139:3140:3141:3142:3152:3865:3866:3867:3868:3870:3871:3872:3874:4250:4321:4385:4605:5007:6117:6119:6261:6653:6742:6743:7875:7903:8603:8660:8957:9969:10004:11026:11232:11473:11657:11658:11914:12043:12048:12291:12296:12297:12438:12555:12691:12737:12895:12986:13148:13230:13846:14096:14097:14394:14659:21080:21365:21433:21444:21451:21611:21627:21990:30054:30064:30067:30070:30075:30079:30090,0,RBL:209.85.128.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMM ARY:none X-HE-Tag: skin39_79e7fef097b60 X-Filterd-Recvd-Size: 14675 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) by imf26.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:13:57 +0000 (UTC) Received: by mail-wm1-f73.google.com with SMTP id v184so1066923wme.7 for ; Wed, 25 Mar 2020 09:13:57 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=OKO6nEfQr1AbR7sx/iTvYwCH8Kd7BfHfKURGBj01uGQ=; b=Vj4rZEJ2oFzn/uUjrjG+TBqNmr9derrAvbyMNdmgJv/qjwIKkB9JJSBPuNbMiwrzGi yJDleczp7ztMHUcriAFRRidgyVJtIR2mSN+0KBbgb50Py7uI2Imqb7Ql+6r9y9PCNtEJ gBCk0NJ/NEiuh5pY2QHgQMcEYeN22SpiqTY7zG6QPE7G9vk7OPVp/jfEa5ik20hxBx8i bvivz/euGgV5/dOqixniHhScgNGHwdVdWmRi9ePdqG/BeazlHfhhqgM6ybrkOJI6MJ+e agtGQk2kI7xSvS6qz/HQcicDCWbd/Ik+3DTkEgb5y3Lt2sscn/mX5z/k5iD3pGpWnB01 Si3w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=OKO6nEfQr1AbR7sx/iTvYwCH8Kd7BfHfKURGBj01uGQ=; b=d2EQ1QGTcf5cbc6akxWqkTbgllBVhiYNGr2xzTR7+tMh8sQwDO+t7jjC1pTaHX5u41 6qqamJ2qqdJih1w8895hPwiiT+etu8CSsUOh+nr6blgSxiYebECw/TiIL5fvXXDzi4Mr T3IHGToKfxIWvULy1WcS5FGZoslcYSIhYCDIfjZ/wAfws3pwRNhX1KPZKXeXAPTPo2ut QfKNEX++yFnxEaD6h3Wu0zIBiHwj1MvU3AeRxSUDPVJUJFMMm4Jh4W0eonxnOz/6TT/E xOe0HI0YalQNOw91Hz1sEri8OAiEYwOe2TI3Kd0aQowrZLaLt7we6ubPuSIRnHEBkAmH nAlA== X-Gm-Message-State: ANhLgQ323hO0ZwuTTwfd9TmKno6OiqyjThdySUk/OuzGnF1CVDG4jyr9 UCQI5YoJuNF4DhtTizjlkPKNlFv/kE8= X-Google-Smtp-Source: ADFU+vtQ9TE7XFKZnayfNeeq68SyxwoeD6m2YheHiwn6zFWz2hWJnnCazyeJsSS04+37+mDsMCb73//5jzc= X-Received: by 2002:adf:d849:: with SMTP id k9mr4229491wrl.108.1585152835999; Wed, 25 Mar 2020 09:13:55 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:30 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-20-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 19/38] 
kmsan: mm: maintain KMSAN metadata for page operations From: glider@google.com To: Andrew Morton , Greg Kroah-Hartman , Eric Dumazet , Wolfram Sang , Petr Mladek , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, ericvh@gmail.com, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Insert KMSAN hooks that make the necessary bookkeeping changes: - allocate/split/deallocate metadata pages in alloc_pages()/split_page()/free_page(); - clear page shadow and origins in clear_page(), copy_user_highpage(); - copy page metadata in copy_highpage(), wp_page_copy(); - handle vmap()/vunmap()/iounmap(); Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Andrew Morton Cc: Greg Kroah-Hartman Cc: Eric Dumazet Cc: Wolfram Sang Cc: Petr Mladek Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- This patch was previously called "kmsan: call KMSAN hooks where needed" v2: - dropped call to kmsan_handle_vprintk, updated comment in printk.c v3: - put KMSAN_INIT_VALUE on a separate line in vprintk_store() - dropped call to kmsan_handle_i2c_transfer() - minor style fixes v4: - split mm-unrelated bits to other patches as requested by Andrey Konovalov - dropped changes to mm/compaction.c - use kmsan_unpoison_shadow in page_64.h and highmem.h Change-Id: I1250a928d9263bf71fdaa067a070bdee686ef47b --- arch/x86/include/asm/page_64.h | 13 +++++++++++++ arch/x86/mm/ioremap.c | 3 +++ include/linux/highmem.h | 3 +++ lib/ioremap.c | 5 +++++ mm/gup.c | 3 +++ mm/memory.c | 2 ++ mm/page_alloc.c | 17 +++++++++++++++++ mm/vmalloc.c | 24 ++++++++++++++++++++++-- 8 files changed, 68 insertions(+), 2 deletions(-) diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h index 939b1cff4a7b7..045856c38f494 100644 --- a/arch/x86/include/asm/page_64.h +++ b/arch/x86/include/asm/page_64.h @@ -44,14 +44,27 @@ void clear_page_orig(void *page); void clear_page_rep(void *page); void clear_page_erms(void *page); +/* This is an assembly header, avoid including too much of kmsan.h */ +#ifdef CONFIG_KMSAN +void kmsan_unpoison_shadow(const void *addr, size_t size); +#endif +__no_sanitize_memory static inline void clear_page(void *page) { +#ifdef CONFIG_KMSAN + /* alternative_call_2() changes |page|. */ + void *page_copy = page; +#endif alternative_call_2(clear_page_orig, clear_page_rep, X86_FEATURE_REP_GOOD, clear_page_erms, X86_FEATURE_ERMS, "=D" (page), "0" (page) : "cc", "memory", "rax", "rcx"); +#ifdef CONFIG_KMSAN + /* Clear KMSAN shadow for the pages that have it. 
*/ + kmsan_unpoison_shadow(page_copy, PAGE_SIZE); +#endif } void copy_page(void *to, void *from); diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c index 935a91e1fd774..80399defe90aa 100644 --- a/arch/x86/mm/ioremap.c +++ b/arch/x86/mm/ioremap.c @@ -7,6 +7,7 @@ * (C) Copyright 1995 1996 Linus Torvalds */ +#include #include #include #include @@ -469,6 +470,8 @@ void iounmap(volatile void __iomem *addr) return; } + kmsan_iounmap_page_range((unsigned long)addr, + (unsigned long)addr + get_vm_area_size(p)); memtype_free(p->phys_addr, p->phys_addr + get_vm_area_size(p)); /* Finally remove it */ diff --git a/include/linux/highmem.h b/include/linux/highmem.h index ea5cdbd8c2c32..9f6efa26e9b5c 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -255,6 +256,7 @@ static inline void copy_user_highpage(struct page *to, struct page *from, vfrom = kmap_atomic(from); vto = kmap_atomic(to); copy_user_page(vto, vfrom, vaddr, to); + kmsan_unpoison_shadow(page_address(to), PAGE_SIZE); kunmap_atomic(vto); kunmap_atomic(vfrom); } @@ -270,6 +272,7 @@ static inline void copy_highpage(struct page *to, struct page *from) vfrom = kmap_atomic(from); vto = kmap_atomic(to); copy_page(vto, vfrom); + kmsan_copy_page_meta(to, from); kunmap_atomic(vto); kunmap_atomic(vfrom); } diff --git a/lib/ioremap.c b/lib/ioremap.c index 3f0e18543de84..14b0325b6fa9e 100644 --- a/lib/ioremap.c +++ b/lib/ioremap.c @@ -6,6 +6,7 @@ * * (C) Copyright 1995 1996 Linus Torvalds */ +#include #include #include #include @@ -214,6 +215,8 @@ int ioremap_page_range(unsigned long addr, unsigned long start; unsigned long next; int err; + unsigned long old_addr = addr; + phys_addr_t old_phys_addr = phys_addr; might_sleep(); BUG_ON(addr >= end); @@ -228,6 +231,8 @@ int ioremap_page_range(unsigned long addr, } while (pgd++, phys_addr += (next - addr), addr = next, addr != end); flush_cache_vmap(start, end); + if (!err) + kmsan_ioremap_page_range(old_addr, end, old_phys_addr, prot); return err; } diff --git a/mm/gup.c b/mm/gup.c index a212305695209..a2546215f165f 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -4,6 +4,7 @@ #include #include +#include #include #include #include @@ -2710,6 +2711,7 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write, gup_fast_permitted(start, end)) { local_irq_save(flags); gup_pgd_range(start, end, gup_flags, pages, &nr_pinned); + kmsan_gup_pgd_range(pages, nr_pinned); local_irq_restore(flags); } @@ -2765,6 +2767,7 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages, gup_fast_permitted(start, end)) { local_irq_disable(); gup_pgd_range(addr, end, gup_flags, pages, &nr_pinned); + kmsan_gup_pgd_range(pages, nr_pinned); local_irq_enable(); ret = nr_pinned; } diff --git a/mm/memory.c b/mm/memory.c index 8d7f387dd0c77..aa9e266449e26 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -51,6 +51,7 @@ #include #include #include +#include #include #include #include @@ -2676,6 +2677,7 @@ static vm_fault_t wp_page_copy(struct vm_fault *vmf) put_page(old_page); return 0; } + kmsan_copy_page_meta(new_page, old_page); } if (mem_cgroup_try_charge_delay(new_page, mm, GFP_KERNEL, &memcg, false)) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index ca1453204e667..869dc64226296 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -26,6 +26,8 @@ #include #include #include +#include +#include #include #include #include @@ -1178,6 +1180,7 @@ static __always_inline bool free_pages_prepare(struct page *page, 
VM_BUG_ON_PAGE(PageTail(page), page); trace_mm_page_free(page, order); + kmsan_free_page(page, order); /* * Check tail pages before head page information is cleared to @@ -3199,6 +3202,7 @@ void split_page(struct page *page, unsigned int order) VM_BUG_ON_PAGE(PageCompound(page), page); VM_BUG_ON_PAGE(!page_count(page), page); + kmsan_split_page(page, order); for (i = 1; i < (1 << order); i++) set_page_refcounted(page + i); split_page_owner(page, order); @@ -3349,6 +3353,14 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone, /* * Allocate a page from the given zone. Use pcplists for order-0 allocations. */ + +/* + * Do not instrument rmqueue() with KMSAN. This function may call + * __msan_poison_alloca() through a call to set_pfnblock_flags_mask(). + * If __msan_poison_alloca() attempts to allocate pages for the stack depot, it + * may call rmqueue() again, which will result in a deadlock. + */ +__no_sanitize_memory static inline struct page *rmqueue(struct zone *preferred_zone, struct zone *zone, unsigned int order, @@ -4862,6 +4874,11 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order, int preferred_nid, trace_mm_page_alloc(page, order, alloc_mask, ac.migratetype); + if (page) + if (kmsan_alloc_page(page, order, gfp_mask)) { + __free_pages(page, order); + page = NULL; + } return page; } EXPORT_SYMBOL(__alloc_pages_nodemask); diff --git a/mm/vmalloc.c b/mm/vmalloc.c index 6b8eeb0ecee51..c5577e616c33b 100644 --- a/mm/vmalloc.c +++ b/mm/vmalloc.c @@ -29,6 +29,7 @@ #include #include #include +#include #include #include #include @@ -127,7 +128,8 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end) } while (p4d++, addr = next, addr != end); } -static void vunmap_page_range(unsigned long addr, unsigned long end) +/* Exported for KMSAN, visible in mm/kmsan/kmsan.h only. */ +void __vunmap_page_range(unsigned long addr, unsigned long end) { pgd_t *pgd; unsigned long next; @@ -141,6 +143,13 @@ static void vunmap_page_range(unsigned long addr, unsigned long end) vunmap_p4d_range(pgd, addr, next); } while (pgd++, addr = next, addr != end); } +EXPORT_SYMBOL(__vunmap_page_range); + +static void vunmap_page_range(unsigned long addr, unsigned long end) +{ + kmsan_vunmap_page_range(addr, end); + __vunmap_page_range(addr, end); +} static int vmap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, pgprot_t prot, struct page **pages, int *nr) @@ -224,8 +233,11 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr, * will have pfns corresponding to the "pages" array. * * Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages[N] + * + * This function is exported for use in KMSAN, but is only declared in KMSAN + * headers. 
*/ -static int vmap_page_range_noflush(unsigned long start, unsigned long end, +int __vmap_page_range_noflush(unsigned long start, unsigned long end, pgprot_t prot, struct page **pages) { pgd_t *pgd; @@ -245,6 +257,14 @@ static int vmap_page_range_noflush(unsigned long start, unsigned long end, return nr; } +EXPORT_SYMBOL(__vmap_page_range_noflush); + +static int vmap_page_range_noflush(unsigned long start, unsigned long end, + pgprot_t prot, struct page **pages) +{ + kmsan_vmap_page_range_noflush(start, end, prot, pages); + return __vmap_page_range_noflush(start, end, prot, pages); +} static int vmap_page_range(unsigned long start, unsigned long end, pgprot_t prot, struct page **pages) From patchwork Wed Mar 25 16:12:31 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458271 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B5A0914B4 for ; Wed, 25 Mar 2020 16:14:13 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7E68A20774 for ; Wed, 25 Mar 2020 16:14:13 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="nCNz5mGq" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7E68A20774 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 9058D6B0088; Wed, 25 Mar 2020 12:14:01 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 8B5226B0089; Wed, 25 Mar 2020 12:14:01 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6BE0F6B008A; Wed, 25 Mar 2020 12:14:01 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0213.hostedemail.com [216.40.44.213]) by kanga.kvack.org (Postfix) with ESMTP id 52ACE6B0088 for ; Wed, 25 Mar 2020 12:14:01 -0400 (EDT) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 291448248047 for ; Wed, 25 Mar 2020 16:14:01 +0000 (UTC) X-FDA: 76634381082.08.skin79_7a58ca484f941 X-Spam-Summary: 2,0,0,de5e47b2552a3b6d,d41d8cd98f00b204,3r4n7xgykcdyydavwjyggydw.ugedafmp-eecnsuc.gjy@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1541:1593:1594:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3152:3352:3740:3865:3866:3867:4250:4321:5007:6261:6653:6742:6743:7875:8660:9969:10004:10400:11026:11232:11473:11658:11914:12043:12048:12296:12297:12438:12555:12895:12986:13069:13148:13230:13311:13357:13846:14096:14097:14181:14394:14659:14721:21080:21324:21365:21444:21451:21611:21627:21990:30045:30054:30064,0,RBL:209.85.128.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:22,LUA_SUMMARY:none X-HE-Tag: skin79_7a58ca484f941 X-Filterd-Recvd-Size: 5318 Received: from mail-wm1-f74.google.com 
(mail-wm1-f74.google.com [209.85.128.74]) by imf18.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:00 +0000 (UTC) Received: by mail-wm1-f74.google.com with SMTP id w9so869499wmi.2 for ; Wed, 25 Mar 2020 09:14:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=kztRBTTWc3oyP+MYIjwGq6CzMPRNXmFizN5qYVB0f+A=; b=nCNz5mGqQ1/xB7t3CbqD7Z7ucAqPvLNBKtFkRP7CRJSWb9m6C6MGjFs1LKEb6vvJiH 2teFoRNDpe90K1b3Zcvhlz6VgQFjL4LC+hanR/A5T72l7gDCjztGF1p0FbCDGT+g2zy2 y1sqxmArJ1Kcf5Z60Wj2gmlwX5iQR/zKCCwMqaa5GiCuZymX/Y1WL6+GBC3s4eJ4DxPs mO8WGhAEH5uvte6Mi/hnOsDU61B6XfXCoEd3E2VSg0c+A2Rr1U+DFGDJgyIN9/vh+Tvv kqIep/PBZTfHo7lVId+p11aoGXRyUYUZ0k2NUF2erbf+lyUMQzddVds88zaAq01vX9wN MzMg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=kztRBTTWc3oyP+MYIjwGq6CzMPRNXmFizN5qYVB0f+A=; b=pi4nIT48QDwAdlFdrRMV3yWSeRI8BXoz+iHRqjQ/VkqAmyniQ3rYFaL8RAyF1biSsT SeReds1DAtUkIS9jvdkpGfIMC/GYyYoFy+ImZoazaTdgeKGSWvn98FtfbgOHRud8+gCT jtsBCRSu9+qNquttqsNAc86PxZQ/HTlzTIJJ33f4iC31Tix5JI6WckIekZrhcH/2AMJF QExPaDvws2QHEF2pXwnbkV21mH2Aa8HgjdtcPPtrbLPBwVl3WSKdVqgafwqdXESSiJHc vWHMWVJxOqLzQ8oABJMenumdXs2I+RgoMzbxLYkEt6x3wpZdsSFwL6k08ehy4mgoTzQc yq1Q== X-Gm-Message-State: ANhLgQ26f8nkf9u8ba4S++Kw57Tp1AY/2XzGQXwVcem4Bndl+H/JHj/l 6UrlDLz3KzW7/uM7gea+e7OAwUDXHuM= X-Google-Smtp-Source: ADFU+vunqqhs/V5QbFSweHy95rVys8n9wJL4i9VuYGsXCu3TNY3e2DhRXllDI0dcGKem0gIISriWvploG9M= X-Received: by 2002:adf:bbcd:: with SMTP id z13mr4184690wrg.389.1585152839184; Wed, 25 Mar 2020 09:13:59 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:31 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-21-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 20/38] kmsan: handle memory sent to/from USB From: glider@google.com To: Andrew Morton , Greg Kroah-Hartman , Eric Dumazet , Wolfram Sang , Petr Mladek , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, ericvh@gmail.com, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Depending on the value of is_out kmsan_handle_urb() KMSAN either marks the data copied to the kernel from a USB device as initialized, or checks the data sent to the device for being initialized. 
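[Illustration, not part of the patch: the sentence above describes a direction-dependent rule. The standalone userspace sketch below models it only; struct fake_urb, its shadow[] array and handle_urb_model() are hypothetical stand-ins, not the kernel's URB or KMSAN API.]

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define BUF_LEN 8

struct fake_urb {
	unsigned char transfer_buffer[BUF_LEN];
	unsigned char shadow[BUF_LEN];    /* 1 = byte is uninitialized */
	size_t transfer_buffer_length;
};

/* Models the direction rule only; not the kernel implementation. */
static void handle_urb_model(struct fake_urb *urb, bool is_out)
{
	size_t i;

	if (is_out) {
		/* OUT: the device will see this data, so report uninit bytes. */
		for (i = 0; i < urb->transfer_buffer_length; i++)
			if (urb->shadow[i])
				printf("uninit byte at offset %zu sent to device\n", i);
	} else {
		/* IN: the device wrote the buffer, so it becomes initialized. */
		memset(urb->shadow, 0, urb->transfer_buffer_length);
	}
}

int main(void)
{
	struct fake_urb urb = { .transfer_buffer_length = BUF_LEN };

	memset(urb.shadow, 1, BUF_LEN); /* freshly allocated: all poisoned */
	handle_urb_model(&urb, true);   /* OUT transfer: 8 bytes reported  */
	handle_urb_model(&urb, false);  /* IN transfer: buffer unpoisoned  */
	handle_urb_model(&urb, true);   /* OUT again: nothing reported     */
	return 0;
}

[In the real patch the decision is made by the single kmsan_handle_urb(urb, is_out) call added to usb_submit_urb(), as the diff below shows.]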
Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Andrew Morton Cc: Greg Kroah-Hartman Cc: Eric Dumazet Cc: Wolfram Sang Cc: Petr Mladek Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- This patch was previously called "kmsan: call KMSAN hooks where needed" v4: - split this patch away Change-Id: Idd0f8ce858975112285706ffb7286f570bd3007b --- drivers/usb/core/urb.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/usb/core/urb.c b/drivers/usb/core/urb.c index da923ec176122..4a0b0ac0f52f9 100644 --- a/drivers/usb/core/urb.c +++ b/drivers/usb/core/urb.c @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -402,6 +403,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags) URB_SETUP_MAP_SINGLE | URB_SETUP_MAP_LOCAL | URB_DMA_SG_COMBINED); urb->transfer_flags |= (is_out ? URB_DIR_OUT : URB_DIR_IN); + kmsan_handle_urb(urb, is_out); if (xfertype != USB_ENDPOINT_XFER_CONTROL && dev->state < USB_STATE_CONFIGURED) From patchwork Wed Mar 25 16:12:32 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458273 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 713A114B4 for ; Wed, 25 Mar 2020 16:14:17 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 30ED220740 for ; Wed, 25 Mar 2020 16:14:17 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="X2m/4GGS" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 30ED220740 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 841346B000C; Wed, 25 Mar 2020 12:14:04 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 81A0D6B008A; Wed, 25 Mar 2020 12:14:04 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6DF596B008C; Wed, 25 Mar 2020 12:14:04 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0163.hostedemail.com [216.40.44.163]) by kanga.kvack.org (Postfix) with ESMTP id 52E0E6B000C for ; Wed, 25 Mar 2020 12:14:04 -0400 (EDT) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 34EFD18199605 for ; Wed, 25 Mar 2020 16:14:04 +0000 (UTC) X-FDA: 76634381208.07.brake54_7aca1ace00b4d X-Spam-Summary: 
2,0,0,835571a2495f8ecf,d41d8cd98f00b204,3son7xgykcdkbgdyzmbjjbgz.xjhgdips-hhfqvxf.jmb@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:966:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1542:1593:1594:1711:1730:1747:1777:1792:2196:2199:2393:2559:2562:3138:3139:3140:3141:3142:3152:3353:3865:3867:3871:3874:4117:4250:4321:4385:5007:6261:6653:6742:6743:7774:7875:8660:9969:10004:10400:11026:11232:11473:11658:11914:12043:12048:12296:12297:12438:12555:12895:12986:13148:13230:13846:14093:14096:14097:14181:14394:14659:14721:21080:21365:21433:21444:21450:21451:21627:21990:30012:30054:30064,0,RBL:209.85.221.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:10,LUA_SUMMARY:none X-HE-Tag: brake54_7aca1ace00b4d X-Filterd-Recvd-Size: 6238 Received: from mail-wr1-f74.google.com (mail-wr1-f74.google.com [209.85.221.74]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:03 +0000 (UTC) Received: by mail-wr1-f74.google.com with SMTP id e10so1373375wru.6 for ; Wed, 25 Mar 2020 09:14:03 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=G1ANBkXsphASF7TIfj2U8dl4l2QWKr5exekIRkX1oFM=; b=X2m/4GGSZ8uFZEoKGkU8zeSpv0/dUPvN0REXGjc+64neEoAEHm4NU7yRzTKff9SBAf LhWBpuCrjoB6QWQApHY6HaWH7wnhJ3wn3FDT4n8DGMvca0YhDPHAsaf6sGN5kt+QD+4W xtFdVhd8yWL4kD+WQES8uSonl936PYAqCR/yA1NskKbpnYnojTR8BxdKP0igDYao37l2 Ch7qs7YTO1pSzbCbMGAg5Siri2RzkUo3VqdepyDhoc1MvF+/V3QcujTmRJ90UVoM2oC4 uK79fofbOwSgdlTKiaCXmdjgM2bEUbgEFiLSlPWVIQ5M8mQezvLnqh+p1hAlTWXl0XJa iuPA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=G1ANBkXsphASF7TIfj2U8dl4l2QWKr5exekIRkX1oFM=; b=X14ePmAmwm6aaABp2/IpbrscUydP7MTt+LVuwbOO66tB4LjAPxwEq39AcpTOyMDxxn ELqYj4qiiVSzjMX20GQAFbtk3Cm+DckYqfu93DJCpIpWgIc6rut7aDfMbo5e7zBhnskl 9Hk78X8FWCpQ0vkJMCVNRCWkf5yze2f7VaTB5NW8szp7qG383gTg2JfM7XA43DxvuoeP cMX8NQF50f93jQz9Cz/iLv9cvGonR4zc4jZDlDklvumCWs4OeIid+VoPt206cV3juhN6 U8r6zeFwZtzkL3FdgNbmrj/Os5HWoh4LGDe8e+m+6AlKyJFKp6IL3Vpr4h0GBO+FVkLs Tsug== X-Gm-Message-State: ANhLgQ1oURty4S8AYx5bM0fCJAQzVE8MBOYkq8hcFZR8rXkJnibsN+uX 2oE2Q3+LsR/lPVGUe5dEm6xZiSuER/k= X-Google-Smtp-Source: ADFU+vsLleCRzgvQ9l2mzah7qORL7skEMW7l2m+CtenE2/AKiBNPOYI5+w5g+O8jt2A8p7KSXr48SSh8rqo= X-Received: by 2002:adf:f309:: with SMTP id i9mr4589083wro.0.1585152842316; Wed, 25 Mar 2020 09:14:02 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:32 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-22-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 21/38] kmsan: handle task creation and exiting From: glider@google.com To: Andrew Morton , Greg Kroah-Hartman , Eric Dumazet , Wolfram Sang , Petr Mladek , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, ericvh@gmail.com, harry.wentland@amd.com, 
herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Tell KMSAN that a new task is created, so the tool creates a backing metadata structure for that task. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Andrew Morton Cc: Greg Kroah-Hartman Cc: Eric Dumazet Cc: Wolfram Sang Cc: Petr Mladek Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- This patch was previously called "kmsan: call KMSAN hooks where needed" v4: - split this patch away Change-Id: I7a6a83419b0e038f8993175461255f462a430205 --- kernel/exit.c | 2 ++ kernel/fork.c | 2 ++ kernel/kthread.c | 2 ++ 3 files changed, 6 insertions(+) diff --git a/kernel/exit.c b/kernel/exit.c index e93c6197a827c..377f9edbb28fa 100644 --- a/kernel/exit.c +++ b/kernel/exit.c @@ -60,6 +60,7 @@ #include #include #include +#include #include #include #include @@ -709,6 +710,7 @@ void __noreturn do_exit(long code) profile_task_exit(tsk); kcov_task_exit(tsk); + kmsan_task_exit(tsk); WARN_ON(blk_needs_flush_plug(tsk)); diff --git a/kernel/fork.c b/kernel/fork.c index d48e063a3abe7..21f7f411880d3 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -37,6 +37,7 @@ #include #include #include +#include #include #include #include @@ -943,6 +944,7 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node) account_kernel_stack(tsk, 1); kcov_task_init(tsk); + kmsan_task_create(tsk); #ifdef CONFIG_FAULT_INJECTION tsk->fail_nth = 0; diff --git a/kernel/kthread.c b/kernel/kthread.c index b262f47046ca4..33ca743ca8b54 100644 --- a/kernel/kthread.c +++ b/kernel/kthread.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include #include @@ -350,6 +351,7 @@ struct task_struct *__kthread_create_on_node(int (*threadfn)(void *data), set_cpus_allowed_ptr(task, cpu_all_mask); } kfree(create); + kmsan_task_create(task); return task; } From patchwork Wed Mar 25 16:12:33 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458275 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 951A8913 for ; Wed, 25 Mar 2020 16:14:21 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 62EA820740 for ; Wed, 25 Mar 2020 16:14:21 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="CuNjJkr2" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 62EA820740 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3383A6B009A; Wed, 25 Mar 2020 12:14:07 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org 
(Postfix, from userid 40) id 272B36B009B; Wed, 25 Mar 2020 12:14:07 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 1B0096B009C; Wed, 25 Mar 2020 12:14:07 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0211.hostedemail.com [216.40.44.211]) by kanga.kvack.org (Postfix) with ESMTP id E5DD96B009A for ; Wed, 25 Mar 2020 12:14:06 -0400 (EDT) Received: from smtpin21.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id BDD5A1819B39A for ; Wed, 25 Mar 2020 16:14:06 +0000 (UTC) X-FDA: 76634381292.21.talk18_7b2a4a9dd9159 X-Spam-Summary: 13,1.2,0,41572e7ed7db1bb0,d41d8cd98f00b204,3tyn7xgykcdwejgbcpemmejc.amkjglsv-kkityai.mpe@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1541:1593:1594:1711:1730:1747:1777:1792:1981:2194:2199:2393:2559:2562:3138:3139:3140:3141:3142:3152:3352:3698:3865:3867:4250:4321:5007:6120:6261:6653:6742:6743:7875:7901:8660:9707:9969:10008:10400:11026:11232:11658:11914:12043:12048:12296:12297:12438:12555:12895:13069:13148:13230:13255:13311:13357:13846:14096:14097:14181:14394:14659:14721:21080:21365:21444:21451:21627:21740:21990:30054:30064:30070,0,RBL:209.85.221.201:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:1:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: talk18_7b2a4a9dd9159 X-Filterd-Recvd-Size: 5128 Received: from mail-vk1-f201.google.com (mail-vk1-f201.google.com [209.85.221.201]) by imf19.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:06 +0000 (UTC) Received: by mail-vk1-f201.google.com with SMTP id v203so965036vkd.13 for ; Wed, 25 Mar 2020 09:14:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=U5cjGukdLuhYwS9bq9faqk2UFsQ/bmDivVgoQTLdZqI=; b=CuNjJkr2f1aTmEfzCyEKnsBRbsikVTED73pIwYa9DkVfhfoNSDA2t9b52mTfjI+rwH 3m/wjZHvh9cgdZ3omyzPh6C/eZeLfM9yGAAHjyUyw0FKLvskNV0sNPCRh0FRg7IxsCEj eSI5NAEMz8momLDr2vP9H8RdGzq7XxECvA1A9jgiaa1E0E18stkVp7l3ctjtIGQJU2N+ N4BmMsL87L+Tsfty/z8q7eJlMUH9uaoOOWqO83WDyGFAJGMKJFt7Gi1kg3Io4pF0mOUr W4DMiTl7+KjAAv+1CorDrAnG/OCtOLW2XWYOjrwsT66mkbrE8Rfua/aAPw+ZKgbHVF+9 nx/g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=U5cjGukdLuhYwS9bq9faqk2UFsQ/bmDivVgoQTLdZqI=; b=fK/NBMx6Crpd32KxkPOfsA8jWnclmSdklmVAzIsckfW++26jxzXYr+RAHnfmEXJ+K3 EL4iaSTsANNyY7LiHyVIUqcjSybH9tui1zvAsVJLuEalHGiZoWJ+O0cND3N+KWjiCZXQ f21YI7nBorXalQ89Mr5ya8srOliVe0iOuCPb45Dq+iMpnXLd250ImzqQLWZHBLxQlees ZHX53w5JwheMuNkudvfVW/c8cBobeHn4qG9v2O/zXG21i8ie97ZaL7L7VcIQ3BWpF3fR GsELlxhT3+ijsGmMfOdWa8LL2CTd3B903JjSlWohIka53pbP/GiqCjM3HOH13XyzKz1l hNfA== X-Gm-Message-State: ANhLgQ2AOmDZzng0UXduR2mFwxprkLmbQyUaSkLG/doKUkO7y/Yu7GPI jV98hfm5bqFPaHLbaGa7wgBqvtVBYxM= X-Google-Smtp-Source: ADFU+vvaimTYaR4nw4BaEX/6j1MKfUBOBp6jI6ParG4it9YusLacbdmIkauoz+NcvtmvWlFSyLEgqLLfAo4= X-Received: by 2002:a1f:d084:: with SMTP id h126mr2842641vkg.25.1585152845554; Wed, 25 Mar 2020 09:14:05 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:33 +0100 In-Reply-To: 
<20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-23-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 22/38] kmsan: net: check the value of skb before sending it to the network From: glider@google.com To: Andrew Morton , Greg Kroah-Hartman , Eric Dumazet , Wolfram Sang , Petr Mladek , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, ericvh@gmail.com, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Calling kmsan_check_skb() lets KMSAN check the bytes to be transferred over the network for being initialized. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Andrew Morton Cc: Greg Kroah-Hartman Cc: Eric Dumazet Cc: Wolfram Sang Cc: Petr Mladek Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- This patch was previously called "kmsan: call KMSAN hooks where needed" v4: - split this patch away Change-Id: Iff48409dc50341d59e355ce3ec11d4722f0799e2 --- net/sched/sch_generic.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c index 2efd5b61acef1..4b2cc309bb1e3 100644 --- a/net/sched/sch_generic.c +++ b/net/sched/sch_generic.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #include #include @@ -654,6 +655,7 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc) } else { WRITE_ONCE(qdisc->empty, true); } + kmsan_check_skb(skb); return skb; } From patchwork Wed Mar 25 16:12:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458277 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 0861F913 for ; Wed, 25 Mar 2020 16:14:25 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id CA99A20740 for ; Wed, 25 Mar 2020 16:14:24 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="KZdKDfyo" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org CA99A20740 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 0C8AF6B000D; Wed, 25 Mar 2020 12:14:11 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by 
kanga.kvack.org (Postfix, from userid 40) id 0027D6B009B; Wed, 25 Mar 2020 12:14:10 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D490A6B009D; Wed, 25 Mar 2020 12:14:10 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0231.hostedemail.com [216.40.44.231]) by kanga.kvack.org (Postfix) with ESMTP id B67D76B000D for ; Wed, 25 Mar 2020 12:14:10 -0400 (EDT) Received: from smtpin08.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 921AC83EAC for ; Wed, 25 Mar 2020 16:14:10 +0000 (UTC) X-FDA: 76634381460.08.shop87_7bb4cde5f4202 X-Spam-Summary: 2,0,0,eed78c8ff005ed94,d41d8cd98f00b204,3uin7xgykcd8hmjefshpphmf.dpnmjovy-nnlwbdl.psh@flex--glider.bounces.google.com,,RULES_HIT:41:152:305:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1542:1593:1594:1711:1730:1747:1777:1792:2393:2553:2559:2562:2693:3138:3139:3140:3141:3142:3152:3353:3865:3866:3867:3868:3870:3871:3872:3874:4250:4321:5007:6261:6653:6742:6743:7875:7903:8660:9969:10004:10400:11026:11232:11473:11658:11914:12043:12048:12296:12297:12438:12555:12895:12986:13148:13221:13229:13230:13846:14093:14097:14181:14394:14659:14721:21080:21365:21433:21444:21451:21627:21939:21990:30054:30064:30070:30090,0,RBL:209.85.128.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: shop87_7bb4cde5f4202 X-Filterd-Recvd-Size: 5666 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) by imf19.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:09 +0000 (UTC) Received: by mail-wm1-f73.google.com with SMTP id f8so1070701wmh.4 for ; Wed, 25 Mar 2020 09:14:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=VgQgLsqptvNqC9anxeIc1DuP2l9vuZVYJJith3YbZag=; b=KZdKDfyoi+CDUuVEnW3GghtiABZyjqDb8iM1HlQi4VYp5Ikh/ebxjRtqGlByUTHFwk n6CPgyaV5z8x2dRhJHPU1waItoH5FGqr3E69K1D3fH8uT/eM2bYXP0yv7hCzzaAOEBNJ htf/EoiTie/W19VjOJw/622jNxyZLNDiIepaA4LJPJSBXnDlToY53V4apGiOtx0dqhQw M53L+UhR7T2kbVMG2kOfE34riS/1ip5boFKl1OvDvQXLmatH8FyitSqzpAWQGsYT5sKp ocwYXU6v9I/6RdfKVuhLG0RZK76YSAWrRxMciFl4NejW31kC9UO81hXkIs+wUqEXQi4m C9Cg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=VgQgLsqptvNqC9anxeIc1DuP2l9vuZVYJJith3YbZag=; b=pabnUebeFwq1xc7D4xWxBcYlYUbbeiP624NjX6jlJobk1iDdmtgZWcQrhMBTdzTZcP HbFbltNDTkGBQFyLUI91S7fuTyqzTMXdzqj2j9LQiStIS7cJjBv1o9ox3StwbVBfQtv+ hEfG2uZ/XUWS1hTg0YVXVP8y8RoWbprRVSrn2rbxU5+bSOl0EBQllrFGEE82fSAhWRsO ruKabbUgAn4Yj/5mCey+pkrdMwlj6Fv7DN9xAI6NJqdg6S/xj4PVP1aXwPCsJRewfLdr 6PEd8wCp1RFSDJe6ePFwR1H7vJ8KhWSYtiDsVAOcVtfetc+KomNfcl7GEhFxrs+7pYgf xG3A== X-Gm-Message-State: ANhLgQ0yHxsmWAJQ4R+B10qfAO3HyFtuGTU7QjK0Lw2DYxsQ66BtkO5D 2bpYiuUZvkZcSViwGskpbNQPxetiufE= X-Google-Smtp-Source: ADFU+vvXQtWt3PMhcVuks8cfm8fLlhdHO/1dOFOF9GE+uTXTwN/LFuknjs9sVN5j4RdGGcJVI3geSOVe6uc= X-Received: by 2002:adf:e946:: with SMTP id m6mr4353064wrn.187.1585152848608; Wed, 25 Mar 2020 09:14:08 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:34 +0100 
In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-24-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 23/38] kmsan: printk: treat the result of vscnprintf() as initialized From: glider@google.com To: Andrew Morton , Greg Kroah-Hartman , Eric Dumazet , Wolfram Sang , Petr Mladek , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, ericvh@gmail.com, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: In vprintk_store(), vscnprintf() may return an uninitialized text_len value if any of its arguments are uninitialized. In that case KMSAN will report one or more errors in vscnprintf() itself, but it doesn't make much sense to track that value further, as it may trigger more errors in printk. Instead, we explicitly mark it as initialized. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Andrew Morton Cc: Greg Kroah-Hartman Cc: Eric Dumazet Cc: Wolfram Sang Cc: Petr Mladek Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org Acked-by: Petr Mladek Reviewed-by: Andrey Konovalov --- This patch was split from "kmsan: call KMSAN hooks where needed", as requested by Andrey Konovalov. Petr Mladek has previously acked the printk part of that patch, hence the Acked-by above. v4: - split this patch away Change-Id: Ibed60b0bdd25f8ae91acee5800b5328e78e0735a --- kernel/printk/printk.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c index ad46062345452..4cadba3c1e68d 100644 --- a/kernel/printk/printk.c +++ b/kernel/printk/printk.c @@ -1913,6 +1913,12 @@ int vprintk_store(int facility, int level, * prefix which might be passed-in as a parameter. */ text_len = vscnprintf(text, sizeof(textbuf), fmt, args); + /* + * If any of vscnprintf() arguments is uninitialized, KMSAN will report + * one or more errors and also probably mark text_len as uninitialized. + * Initialize |text_len| to prevent the errors from spreading further. 
+ */ + text_len = KMSAN_INIT_VALUE(text_len); /* mark and strip a trailing newline */ if (text_len && text[text_len-1] == '\n') { From patchwork Wed Mar 25 16:12:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458279 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id B8F51913 for ; Wed, 25 Mar 2020 16:14:28 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 6C4DC20409 for ; Wed, 25 Mar 2020 16:14:28 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="MYbrzIrq" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 6C4DC20409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 4CEB86B000E; Wed, 25 Mar 2020 12:14:14 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 3D39B6B009B; Wed, 25 Mar 2020 12:14:14 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 24D9D6B009E; Wed, 25 Mar 2020 12:14:14 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0106.hostedemail.com [216.40.44.106]) by kanga.kvack.org (Postfix) with ESMTP id E3F6B6B000E for ; Wed, 25 Mar 2020 12:14:13 -0400 (EDT) Received: from smtpin26.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id CB7C923118 for ; Wed, 25 Mar 2020 16:14:13 +0000 (UTC) X-FDA: 76634381586.26.spy60_7c30d4939a554 X-Spam-Summary: 2,0,0,fabb2887528f0fdc,d41d8cd98f00b204,3u4n7xgykceikpmhivksskpi.gsqpmry1-qqozego.svk@flex--glider.bounces.google.com,,RULES_HIT:1:2:41:152:355:379:541:800:960:968:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1593:1594:1605:1730:1747:1777:1792:1801:2194:2199:2393:2553:2559:2562:2895:2904:3138:3139:3140:3141:3142:3152:3865:3866:3867:3868:3870:3871:3872:3874:4051:4250:4321:4605:5007:6119:6261:6653:6742:6743:7774:7875:7903:7974:8660:9040:9969:11026:11473:11657:11658:11914:12043:12048:12291:12296:12297:12438:12555:12895:12986:13148:13230:13846:13972:14096:14097:14394:14659:21080:21325:21365:21433:21444:21451:21618:21795:21966:21987:21990:30012:30029:30045:30051:30054:30064:30075:30079:30090,0,RBL:209.85.128.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:1:0,LFtime:2,LUA_SUMMARY:none X-HE-Tag: spy60_7c30d4939a554 X-Filterd-Recvd-Size: 12116 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) by imf30.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:13 +0000 (UTC) Received: by mail-wm1-f73.google.com with SMTP id y1so1073874wmj.3 for ; Wed, 25 Mar 2020 09:14:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=hDEG4SL4kroOQXm1WszfPhYOz0CcPiVpP2hbsmuCCeY=; 
b=MYbrzIrqVxxzIGycKH1/6Rq/YAclt9uQQ1gKsN3qxXZZrnHp9ql3MFwHyQKn2q28Fw A2Xnd2mCeJESFu1a8wM1zhxMEHrRbvDOtHpRhm46vQl5p0sUOO06jOWXHUeWnUFfeoQw sDI8etOUaP4gBUAjZeHzP4YQ9pjcn/fgYbNqhzvjCrhC8frGGDue+nd3slYa48x6+ZBY fmOpJ1rJamwXQiBUJyyLC8XyqJnI1zbCSpPlime0QuAqUPeE7UZ0oKLDjOjOkXy3Cwwm vhSlJhStcSfW9WyWNodnegvw/tPXFl/+G9ZOgwxyPNUY03Cg9KIiFD1xyDp2EIx6fnfR FiMQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=hDEG4SL4kroOQXm1WszfPhYOz0CcPiVpP2hbsmuCCeY=; b=mMOiNJTEd2RNVAmB3CT6KkF39HH0sxN6z0GkT5xm1WVnRIG58K3+JIkKZYifVrDT7S Hbd/3ovXnjaQoKt97BdV4utD6htpsP4vgtwdGYdwmlys1FH/S8nR7SQIYNnR5D7X/NV7 GQAi1biJGggM40kjVxmtByx6K9semWCh/r8ZWEFRu/3d86Ruj/tmUfr1inuPigz1JgEx zNlqxFlA2ug2TT7YdpgadMnGufRsDeYXlLy1HFiNTyJzj0yi92WlIssxRgrDc+TCk7V4 +rCvvr1ah+SYqwRbVskQ7cWLpB6NM8wnogivReHehaMkqXtQuAva7DDi8Hn0LE9RA8x7 OmHg== X-Gm-Message-State: ANhLgQ3XS7VB21O/aVppXW5+etBJxht0cwWbIgA0v4bsvxEosWDA0zI0 cgRGHMRWIkkeTjnlHCxVwdth2+A/2Mw= X-Google-Smtp-Source: ADFU+vt/2hLUuCl0oc5iXdL/FQyJ0Y0Nfl1tE0Wm2ypMPo1e8PQclrs4LqCJegH58xzwkapY7i8AdqLi6EQ= X-Received: by 2002:a5d:488c:: with SMTP id g12mr4344764wrq.67.1585152851835; Wed, 25 Mar 2020 09:14:11 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:35 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-25-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 24/38] kmsan: disable instrumentation of certain functions From: glider@google.com To: Thomas Gleixner , Andrew Morton , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Some functions are called from handwritten assembly, and therefore don't have their arguments' metadata fully set up by the instrumentation code. Mark them with __no_sanitize_memory to avoid false positives from spreading further. Certain functions perform task switching, so that the value of |current| is different as they proceed. Because KMSAN state pointer is only read once at the beginning of the function, touching it after |current| has changed may be dangerous. 
Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Thomas Gleixner Cc: Andrew Morton Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- v3: - removed TODOs from comments v4: - updated the comments, dropped __no_sanitize_memory from idle_cpu(), sched_init(), profile_tick() - split away the uprobes part as requested by Andrey Konovalov Change-Id: I684d23dac5a22eb0a4cea71993cb934302b17cea --- arch/x86/entry/common.c | 2 ++ arch/x86/include/asm/irq_regs.h | 2 ++ arch/x86/include/asm/syscall_wrapper.h | 2 ++ arch/x86/kernel/apic/apic.c | 3 +++ arch/x86/kernel/dumpstack_64.c | 5 +++++ arch/x86/kernel/process_64.c | 5 +++++ arch/x86/kernel/traps.c | 13 +++++++++++-- kernel/sched/core.c | 22 ++++++++++++++++++++++ 8 files changed, 52 insertions(+), 2 deletions(-) diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c index ec167d8c41cbd..5c3d0f3a14c37 100644 --- a/arch/x86/entry/common.c +++ b/arch/x86/entry/common.c @@ -280,6 +280,8 @@ __visible inline void syscall_return_slowpath(struct pt_regs *regs) } #ifdef CONFIG_X86_64 +/* Tell KMSAN to not instrument this function and to initialize |regs|. */ +__no_sanitize_memory __visible void do_syscall_64(unsigned long nr, struct pt_regs *regs) { struct thread_info *ti; diff --git a/arch/x86/include/asm/irq_regs.h b/arch/x86/include/asm/irq_regs.h index 187ce59aea28e..a6fc1641e2861 100644 --- a/arch/x86/include/asm/irq_regs.h +++ b/arch/x86/include/asm/irq_regs.h @@ -14,6 +14,8 @@ DECLARE_PER_CPU(struct pt_regs *, irq_regs); +/* Tell KMSAN to return an initialized struct pt_regs. */ +__no_sanitize_memory static inline struct pt_regs *get_irq_regs(void) { return __this_cpu_read(irq_regs); diff --git a/arch/x86/include/asm/syscall_wrapper.h b/arch/x86/include/asm/syscall_wrapper.h index e2389ce9bf58a..098b1a8d6bc41 100644 --- a/arch/x86/include/asm/syscall_wrapper.h +++ b/arch/x86/include/asm/syscall_wrapper.h @@ -196,6 +196,8 @@ struct pt_regs; ALLOW_ERROR_INJECTION(__x64_sys##name, ERRNO); \ static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__)); \ static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__));\ + /* Tell KMSAN to initialize |regs|. */ \ + __no_sanitize_memory \ asmlinkage long __x64_sys##name(const struct pt_regs *regs) \ { \ return __se_sys##name(SC_X86_64_REGS_TO_ARGS(x,__VA_ARGS__));\ diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c index 5f973fed3c9ff..1f0250f14e462 100644 --- a/arch/x86/kernel/apic/apic.c +++ b/arch/x86/kernel/apic/apic.c @@ -1127,6 +1127,9 @@ static void local_apic_timer_interrupt(void) * [ if a single-CPU system runs an SMP kernel then we call the local * interrupt as well. Thus we cannot inline the local irq ... ] */ + +/* Tell KMSAN to initialize |regs|. */ +__no_sanitize_memory __visible void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs) { struct pt_regs *old_regs = set_irq_regs(regs); diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c index 87b97897a8810..3d1691f81cada 100644 --- a/arch/x86/kernel/dumpstack_64.c +++ b/arch/x86/kernel/dumpstack_64.c @@ -150,6 +150,11 @@ static bool in_irq_stack(unsigned long *stack, struct stack_info *info) return true; } +/* + * This function may touch stale uninitialized values on stack. Do not + * instrument it with KMSAN to avoid false positives. 
+ */ +__no_sanitize_memory int get_stack_info(unsigned long *stack, struct task_struct *task, struct stack_info *info, unsigned long *visit_mask) { diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c index ffd497804dbc3..5e8c6767e9916 100644 --- a/arch/x86/kernel/process_64.c +++ b/arch/x86/kernel/process_64.c @@ -424,6 +424,11 @@ void compat_start_thread(struct pt_regs *regs, u32 new_ip, u32 new_sp) * Kprobes not supported here. Set the probe on schedule instead. * Function graph tracer not supported too. */ +/* + * Avoid touching KMSAN state or reporting anything here, as __switch_to() does + * weird things with tasks. + */ +__no_sanitize_memory __visible __notrace_funcgraph struct task_struct * __switch_to(struct task_struct *prev_p, struct task_struct *next_p) { diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index d54cffdc7cac2..917268aee054e 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -638,7 +638,11 @@ NOKPROBE_SYMBOL(do_int3); * Help handler running on a per-cpu (IST or entry trampoline) stack * to switch to the normal thread stack if the interrupted code was in * user mode. The actual stack switch is done in entry_64.S + * */ + +/* This function switches the registers - don't instrument it with KMSAN. */ +__no_sanitize_memory asmlinkage __visible notrace struct pt_regs *sync_regs(struct pt_regs *eregs) { struct pt_regs *regs = (struct pt_regs *)this_cpu_read(cpu_current_top_of_stack) - 1; @@ -654,6 +658,11 @@ struct bad_iret_stack { }; asmlinkage __visible notrace +/* + * Dark magic happening here, let's not instrument this function. + * Also avoid copying any metadata by using raw __memmove(). + */ +__no_sanitize_memory struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s) { /* @@ -668,10 +677,10 @@ struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s) (struct bad_iret_stack *)this_cpu_read(cpu_tss_rw.x86_tss.sp0) - 1; /* Copy the IRET target to the new stack. */ - memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8); + __memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8); /* Copy the remainder of the stack from the current stack. */ - memmove(new_stack, s, offsetof(struct bad_iret_stack, regs.ip)); + __memmove(new_stack, s, offsetof(struct bad_iret_stack, regs.ip)); BUG_ON(!user_mode(&new_stack->regs)); return new_stack; diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 1a5937936ac75..bb1b659c12f6a 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -471,6 +471,11 @@ void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task) put_task_struct(task); } +/* + * Context switch here may lead to KMSAN task state corruption. Disable KMSAN + * instrumentation. + */ +__no_sanitize_memory void wake_up_q(struct wake_q_head *head) { struct wake_q_node *node = head->first; @@ -3217,6 +3222,12 @@ prepare_task_switch(struct rq *rq, struct task_struct *prev, * past. prev == current is still correct but we need to recalculate this_rq * because prev may have moved to another CPU. */ + +/* + * Context switch here may lead to KMSAN task state corruption. Disable KMSAN + * instrumentation. + */ +__no_sanitize_memory static struct rq *finish_task_switch(struct task_struct *prev) __releases(rq->lock) { @@ -4052,6 +4063,12 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) * * WARNING: must be called with preemption disabled! */ + +/* + * Context switch here may lead to KMSAN task state corruption. Disable KMSAN + * instrumentation. 
+ */ +__no_sanitize_memory static void __sched notrace __schedule(bool preempt) { struct task_struct *prev, *next; @@ -6789,6 +6806,11 @@ static inline int preempt_count_equals(int preempt_offset) return (nested == preempt_offset); } +/* + * This function might be called from code that is not instrumented with KMSAN. + * Nevertheless, treat its arguments as initialized. + */ +__no_sanitize_memory void __might_sleep(const char *file, int line, int preempt_offset) { /* From patchwork Wed Mar 25 16:12:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458281 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 6F0B96CA for ; Wed, 25 Mar 2020 16:14:32 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 32C2820409 for ; Wed, 25 Mar 2020 16:14:32 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="kArDV6HC" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 32C2820409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id C61626B0010; Wed, 25 Mar 2020 12:14:16 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id B9C516B009B; Wed, 25 Mar 2020 12:14:16 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id A64256B009F; Wed, 25 Mar 2020 12:14:16 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0219.hostedemail.com [216.40.44.219]) by kanga.kvack.org (Postfix) with ESMTP id 85B526B0010 for ; Wed, 25 Mar 2020 12:14:16 -0400 (EDT) Received: from smtpin02.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 6546D1815A4B5 for ; Wed, 25 Mar 2020 16:14:16 +0000 (UTC) X-FDA: 76634381712.02.base08_7c932de703909 X-Spam-Summary: 2,0,0,2a09401de04e7b1f,d41d8cd98f00b204,3v4n7xgykceyotqlmzowwotm.kwutqv25-uus3iks.wzo@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:968:973:982:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1542:1593:1594:1711:1730:1747:1777:1792:2393:2559:2562:2693:2895:3138:3139:3140:3141:3142:3152:3353:3865:3867:3868:3870:3871:3872:4250:4321:5007:6261:6653:6742:6743:7875:9969:10004:10400:11026:11658:11914:12043:12048:12114:12296:12297:12438:12555:12895:12986:13846:14181:14394:14659:14721:21080:21365:21444:21451:21627:21990:30054:30064,0,RBL:209.85.217.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: base08_7c932de703909 X-Filterd-Recvd-Size: 5572 Received: from mail-vs1-f74.google.com (mail-vs1-f74.google.com [209.85.217.74]) by imf33.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:15 +0000 (UTC) Received: by mail-vs1-f74.google.com with SMTP id s17so449686vss.13 for ; Wed, 25 Mar 2020 09:14:15 -0700 (PDT) DKIM-Signature: 
v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=+1ImLXF3l/Ipa3gAsWzPBFk/gFKHo02K1UNcHQIkRhA=; b=kArDV6HCjlwydcRPHSUXYTyLL1UjhaFtTiaX2HIxk9+4+hDhIP5uouw9YMXN1Vu72j 5I/ip6RxV0Y1aDaa7AuQJ1m9LBROs6ESqEtmVY1ch8U31EGAWzp6hhcx1UGriou+4S4W bJ1jwCHmb4zY2FIawTY5CHniuME0a2jHJWyqoyDo3sNlVJpO+zT1dR5uhN7syKu/2sdN RByBne6HsgvBo4z4/+AhduyMhT/WM3HHkEt+jM0/ptUZJqoA+othHAU4Edv96Afm6UBN CmqBNtq2+FzKPc88/mpHdzF5bsH6v9rdqVsfNsvPsV10rY77KygeErIL7iW6sS24hdux kweg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=+1ImLXF3l/Ipa3gAsWzPBFk/gFKHo02K1UNcHQIkRhA=; b=EmW9fahBHv6wscUs/i6isoW8RdYmz9WL7nVb1+J6/YmQtUkdMngAomIaQaOaJenX2m KWLKOoPI5uHrCeNJMv4JiXsQ7DspdWleg1m4LiYbGonSZeKVsmBnOXYD9rDQnqgxFmU8 kR4XbfxjgG69ftwZuJ7LdwNPZDC7ytAYffQpbF1xJcDPLm3kT/BsHIiGrF7doUrMjI5z 6TVXY+mf/aVYmlL8PbS2eF/ymaBxMD8GIV2ojIYVM4hPqE5x14VyGay4csTVMEsv10ax 40XMXXLhLQHgYgWSSVpDusILI58DSlz/P2X8l+0C98dNyUfywnlaR2t+V/Z94pxhazea QdWQ== X-Gm-Message-State: ANhLgQ001CwLOr6ZNFtgTmWLjFAmaKj6oEjqm02kz+MZUJx6jDlLHDqc jJyLBO/gYa8sE+2IzjI7d9Q5PQkAubk= X-Google-Smtp-Source: ADFU+vs2Jv2EYeUFlUVeoCis67biNjSKQGPoDzysxktIo0hAgVoJPhmvNfsrSVn4pmmAUfhFiYkuGgbe4Ng= X-Received: by 2002:ab0:710a:: with SMTP id x10mr2898993uan.76.1585152855214; Wed, 25 Mar 2020 09:14:15 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:36 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-26-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 25/38] kmsan: unpoison |tlb| in arch_tlb_gather_mmu() From: glider@google.com To: Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is a hack to reduce stackdepot pressure. struct mmu_gather contains 7 1-bit fields packed into a 32-bit unsigned int value. The remaining 25 bits remain uninitialized and are never used, but KMSAN updates the origin for them in zap_pXX_range() in mm/memory.c, thus creating very long origin chains. This is technically correct, but consumes too much memory. Unpoisoning the whole structure will prevent creating such chains. 
Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org Reviewed-by: Andrey Konovalov --- v4: - removed a TODO, updated patch description Change-Id: I22a201e7e4f67ed74f8129072f12e5351b26103a --- mm/mmu_gather.c | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c index a3538cb2bcbee..d3d57c276e301 100644 --- a/mm/mmu_gather.c +++ b/mm/mmu_gather.c @@ -1,6 +1,7 @@ #include #include #include +#include #include #include #include @@ -264,6 +265,15 @@ void tlb_flush_mmu(struct mmu_gather *tlb) void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm, unsigned long start, unsigned long end) { + /* + * struct mmu_gather contains 7 1-bit fields packed into a 32-bit + * unsigned int value. The remaining 25 bits remain uninitialized + * and are never used, but KMSAN updates the origin for them in + * zap_pXX_range() in mm/memory.c, thus creating very long origin + * chains. This is technically correct, but consumes too much memory. + * Unpoisoning the whole structure will prevent creating such chains. + */ + kmsan_unpoison_shadow(tlb, sizeof(*tlb)); tlb->mm = mm; /* Is it from 0 to ~0? */ From patchwork Wed Mar 25 16:12:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458283 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 1EFDF6CA for ; Wed, 25 Mar 2020 16:14:36 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C3E8920409 for ; Wed, 25 Mar 2020 16:14:35 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="p0Q37psA" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C3E8920409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 3AD7F6B009B; Wed, 25 Mar 2020 12:14:20 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 335CD6B00A0; Wed, 25 Mar 2020 12:14:20 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 188FD6B00A1; Wed, 25 Mar 2020 12:14:20 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0223.hostedemail.com [216.40.44.223]) by kanga.kvack.org (Postfix) with ESMTP id E28856B009B for ; Wed, 25 Mar 2020 12:14:19 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id D065E844F4 for ; Wed, 25 Mar 2020 16:14:19 +0000 (UTC) X-FDA: 76634381838.17.boat21_7d0e85d495535 X-Spam-Summary: 
2,0,0,7697d3f9fd7a6b19,d41d8cd98f00b204,3won7xgykcekrwtop2rzzrwp.nzxwty58-xxv6lnv.z2r@flex--glider.bounces.google.com,,RULES_HIT:1:2:41:152:355:379:541:800:960:967:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1593:1594:1605:1730:1747:1777:1792:1801:2393:2525:2538:2559:2563:2682:2685:2859:2895:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3152:3865:3866:3867:3868:3870:3871:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4049:4250:4321:4605:5007:6261:6653:6691:6742:6743:7875:8603:8660:8784:9025:9969:10004:11026:11473:11657:11658:11914:12043:12048:12050:12296:12297:12438:12555:12895:12986:13148:13230:13846:14096:14097:14394:14659:21080:21365:21444:21451:21627:21990:30054:30055:30064:30067,0,RBL:209.85.222.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:23 ,LUA_SUM X-HE-Tag: boat21_7d0e85d495535 X-Filterd-Recvd-Size: 10332 Received: from mail-ua1-f74.google.com (mail-ua1-f74.google.com [209.85.222.74]) by imf02.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:19 +0000 (UTC) Received: by mail-ua1-f74.google.com with SMTP id f15so1033673uap.19 for ; Wed, 25 Mar 2020 09:14:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=rDP3zvgiw6o5ldgzCmXWwQQtGpnJpieSNSs99UNbhmM=; b=p0Q37psA7jmeP1LLf1OV6yMEYr/uopsM1ykTowb8hpLdpoU0MZJ6mIOBXr3Ar3s7eO yzc2lDMsO7ZRl3QuBlwkMUFwup8E557hPnMlg5CTmnn0nHHCb5MPDHeFL/8Q7L/DJVZU e22axRNO1ADL1EWZSdPIGCipMeDXYPoMQOa1xWaT8rA2X8RFTHgv80svvuLBFYQd9i8n Kw5e7Xit2pVG+w+H2e/o2XaTxbA+ZvIzHdimST+vpxaw+bGWgdXtfougs+BoyjZLaDMB 0rf3AHkov//E8O6HfvxaYHT5/JiRESJAQ2Fx70p8pmbGOQJH8qL/9y9wDlfHQm/HmcNq JtbA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=rDP3zvgiw6o5ldgzCmXWwQQtGpnJpieSNSs99UNbhmM=; b=rKGnPxBtnvxJkLlxpbXZNMOo4/kGwcsdgNlP5hFFSKFUaK/Gag3cdtn6ucgZlZHy1i jrBZmWptuc8EDjWWs9JJHxamxgaeZ6pTNMF2ugJaeFloCyOOp+YaaeK65Z1F0xxEMH/V taQiOuYDiEmYYdwwnNWTvtJLOrfKy4gTs1UFOLD8TcfxdZIaHLv/n2IGmmaP5amh5Y3W gil1AB+nmtRvu074D41+I3eh+XHcvz0Mfa3rfIiB/DLl9J/OfjwQftB7dxE5D5RvL163 GMLG6AwzvaeCJutpidR1KGtoVMkUUZgyTFGRrKjJP7htAoi8S2CNEuACd5/NlXTT7Uh+ bsMg== X-Gm-Message-State: ANhLgQ0SLgETK0Un3XoAVI+ihbadQu5JxYDjQJqAe8ZrGalmyB6+4Fzz rG4aZFH1ognK0AGyqPwUqnhl5COKHLE= X-Google-Smtp-Source: ADFU+vsUw+oOwSvnNREg6mLrvxXsGkgeqUufnnw6E3EvGa3IxIJIxu7jik333Z7SunlZSnNKbHS8f1zVScE= X-Received: by 2002:a1f:4c86:: with SMTP id z128mr2832911vka.70.1585152858447; Wed, 25 Mar 2020 09:14:18 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:37 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-27-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 26/38] kmsan: use __msan_ string functions where possible. 
From: glider@google.com To: Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Unless stated otherwise (by explicitly calling __memcpy(), __memset() or __memmove()) we want all string functions to call their __msan_ versions (e.g. __msan_memcpy() instead of memcpy()), so that shadow and origin values are updated accordingly. Bootloader must still use the default string functions to avoid crashes. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- v3: - use default string functions in the bootloader v4: - include kmsan-checks.h into compiler.h - also handle memset() and memmove() - fix https://github.com/google/kmsan/issues/64 v5: - don't compile memset() and memmove() under KMSAN Change-Id: Ib2512ce5aa8d457453dd38caa12f58f002166813 --- arch/x86/boot/compressed/misc.h | 1 + arch/x86/include/asm/string_64.h | 23 ++++++++++++++++++- .../firmware/efi/libstub/efi-stub-helper.c | 5 ++++ drivers/firmware/efi/libstub/tpm.c | 5 ++++ include/linux/compiler.h | 9 +++++++- include/linux/string.h | 2 ++ 6 files changed, 43 insertions(+), 2 deletions(-) diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h index c8181392f70d7..dd4bd8c5d97a1 100644 --- a/arch/x86/boot/compressed/misc.h +++ b/arch/x86/boot/compressed/misc.h @@ -12,6 +12,7 @@ #undef CONFIG_PARAVIRT_XXL #undef CONFIG_PARAVIRT_SPINLOCKS #undef CONFIG_KASAN +#undef CONFIG_KMSAN /* cpu_feature_enabled() cannot be used this early */ #define USE_EARLY_PGTABLE_L5 diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h index 75314c3dbe471..0442e679b3079 100644 --- a/arch/x86/include/asm/string_64.h +++ b/arch/x86/include/asm/string_64.h @@ -11,11 +11,23 @@ function. 
*/ #define __HAVE_ARCH_MEMCPY 1 +#if defined(CONFIG_KMSAN) +#undef memcpy +/* __msan_memcpy() is defined in compiler.h */ +#define memcpy(dst, src, len) __msan_memcpy(dst, src, len) +#else extern void *memcpy(void *to, const void *from, size_t len); +#endif extern void *__memcpy(void *to, const void *from, size_t len); #define __HAVE_ARCH_MEMSET +#if defined(CONFIG_KMSAN) +extern void *__msan_memset(void *s, int c, size_t n); +#undef memset +#define memset(dst, c, len) __msan_memset(dst, c, len) +#else void *memset(void *s, int c, size_t n); +#endif void *__memset(void *s, int c, size_t n); #define __HAVE_ARCH_MEMSET16 @@ -55,7 +67,13 @@ static inline void *memset64(uint64_t *s, uint64_t v, size_t n) } #define __HAVE_ARCH_MEMMOVE +#if defined(CONFIG_KMSAN) +#undef memmove +void *__msan_memmove(void *dest, const void *src, size_t len); +#define memmove(dst, src, len) __msan_memmove(dst, src, len) +#else void *memmove(void *dest, const void *src, size_t count); +#endif void *__memmove(void *dest, const void *src, size_t count); int memcmp(const void *cs, const void *ct, size_t count); @@ -64,7 +82,8 @@ char *strcpy(char *dest, const char *src); char *strcat(char *dest, const char *src); int strcmp(const char *cs, const char *ct); -#if defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__) +#if (defined(CONFIG_KASAN) && !defined(__SANITIZE_ADDRESS__)) || \ + (defined(CONFIG_KMSAN) && !defined(__SANITIZE_MEMORY__)) /* * For files that not instrumented (e.g. mm/slub.c) we @@ -73,7 +92,9 @@ int strcmp(const char *cs, const char *ct); #undef memcpy #define memcpy(dst, src, len) __memcpy(dst, src, len) +#undef memmove #define memmove(dst, src, len) __memmove(dst, src, len) +#undef memset #define memset(s, c, n) __memset(s, c, n) #ifndef __NO_FORTIFY diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c index 9f34c72429397..610f791c2493e 100644 --- a/drivers/firmware/efi/libstub/efi-stub-helper.c +++ b/drivers/firmware/efi/libstub/efi-stub-helper.c @@ -5,7 +5,12 @@ * implementation files. * * Copyright 2011 Intel Corporation; author Matt Fleming + * + * + * This file is not linked with KMSAN runtime. + * Do not replace memcpy with __memcpy. */ +#undef CONFIG_KMSAN #include #include diff --git a/drivers/firmware/efi/libstub/tpm.c b/drivers/firmware/efi/libstub/tpm.c index 1d59e103a2e3a..7e8906b1c1c98 100644 --- a/drivers/firmware/efi/libstub/tpm.c +++ b/drivers/firmware/efi/libstub/tpm.c @@ -6,7 +6,12 @@ * Copyright (C) 2017 Google, Inc. * Matthew Garrett * Thiebaud Weksteen + * + * + * This file is not linked with KMSAN runtime. + * Do not replace memcpy with __memcpy. 
*/ +#undef CONFIG_KMSAN #include #include #include diff --git a/include/linux/compiler.h b/include/linux/compiler.h index c6c67729729e3..f2b97241fe2d4 100644 --- a/include/linux/compiler.h +++ b/include/linux/compiler.h @@ -180,6 +180,13 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val, #include #include +#ifdef CONFIG_KMSAN +void *__msan_memcpy(void *dst, const void *src, u64 size); +#define __DO_MEMCPY(res, p, size) __msan_memcpy(res, p, size) +#else +#define __DO_MEMCPY(res, p, size) __builtin_memcpy(res, p, size) +#endif + #define __READ_ONCE_SIZE \ ({ \ switch (size) { \ @@ -189,7 +196,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val, case 8: *(__u64 *)res = *(volatile __u64 *)p; break; \ default: \ barrier(); \ - __builtin_memcpy((void *)res, (const void *)p, size); \ + __DO_MEMCPY((void *)res, (const void *)p, size); \ barrier(); \ } \ }) diff --git a/include/linux/string.h b/include/linux/string.h index 6dfbb2efa8157..7ef92817c082f 100644 --- a/include/linux/string.h +++ b/include/linux/string.h @@ -356,6 +356,7 @@ __FORTIFY_INLINE char *strncat(char *p, const char *q, __kernel_size_t count) return p; } +#ifndef CONFIG_KMSAN __FORTIFY_INLINE void *memset(void *p, int c, __kernel_size_t size) { size_t p_size = __builtin_object_size(p, 0); @@ -395,6 +396,7 @@ __FORTIFY_INLINE void *memmove(void *p, const void *q, __kernel_size_t size) fortify_panic(__func__); return __builtin_memmove(p, q, size); } +#endif extern void *__real_memscan(void *, int, __kernel_size_t) __RENAME(memscan); __FORTIFY_INLINE void *memscan(void *p, int c, __kernel_size_t size) From patchwork Wed Mar 25 16:12:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458285 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 501EA6CA for ; Wed, 25 Mar 2020 16:14:40 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 020E920409 for ; Wed, 25 Mar 2020 16:14:40 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="prjBqCQU" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 020E920409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id CFC596B00A0; Wed, 25 Mar 2020 12:14:23 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id C5F886B00A2; Wed, 25 Mar 2020 12:14:23 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id AB4066B00A3; Wed, 25 Mar 2020 12:14:23 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0033.hostedemail.com [216.40.44.33]) by kanga.kvack.org (Postfix) with ESMTP id 878C66B00A0 for ; Wed, 25 Mar 2020 12:14:23 -0400 (EDT) Received: from smtpin07.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 6BF4784781 for ; Wed, 25 Mar 2020 16:14:23 +0000 (UTC) X-FDA: 76634382006.07.pull39_7d97cae83b414 X-Spam-Summary: 
2,0,0,5ed3f24c9ff08ea3,d41d8cd98f00b204,3xyn7xgykcewuzwrs5u22uzs.q20zw18b-00y9oqy.25u@flex--glider.bounces.google.com,,RULES_HIT:1:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1593:1594:1605:1730:1747:1777:1792:1981:2194:2199:2393:2559:2562:2636:3138:3139:3140:3141:3142:3152:3865:3866:3867:3868:3871:3872:3874:4321:4605:5007:6261:6653:6742:6743:7875:7903:7904:8603:9969:10004:11026:11473:11657:11658:11914:12043:12048:12291:12297:12438:12555:12683:12895:12986:13846:14096:14097:14394:14659:21080:21365:21444:21451:21627:21990:30012:30051:30054:30064:30070,0,RBL:209.85.221.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:62,LUA_SUMMARY:none X-HE-Tag: pull39_7d97cae83b414 X-Filterd-Recvd-Size: 14488 Received: from mail-wr1-f74.google.com (mail-wr1-f74.google.com [209.85.221.74]) by imf13.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:22 +0000 (UTC) Received: by mail-wr1-f74.google.com with SMTP id o18so1372076wrx.9 for ; Wed, 25 Mar 2020 09:14:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=xmEL4PoSL4dRoy+EDuixfj+9Bi6u9R1TX0Usnq36bgE=; b=prjBqCQUE169ghwz94hYnk4baHKSrXUFg7F0ufECHrJWj+Zv6bmmfW+kaM6OypKc48 mXAcJuzRL0uCKPO5Bv1AZTdNqIsZa6B6kn8cniXRBF/nANIlRhS9LjbODWn0TNV0GJwy v8sZumgKsUGTWymfd1wVBnyuiF232VPa6aI3VpAz80GHdYEjDHB4HnJ9xMp6G9yZqR8n 9MjnCMJoKSpK0f4LtHURqqo25gxqAH2xwRY/ZmiJGSS5hEtdVZ1uZ9epXBhgNL85u2p/ tQzX64lrRCoffZiptIwMfeSXQ9o7QXmIBEDDZ5dTr1bBucLbnNgoAxvv+ieFY2i8eKq4 BrSg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=xmEL4PoSL4dRoy+EDuixfj+9Bi6u9R1TX0Usnq36bgE=; b=SqEYkRQ+9u0P9QZtgDv+P0rnl67Ye3fMRD5wJfmGHStES/io/F4Ud+zrQC/I3ICNJn OxNoW/fJoQVEOM/MCZk+sRamZXX/7qyV9zpaZqkBFiiiqsW9pyezYsj8M3Kjd7vNZiJq qWfvnJcRp8Vxg65oPbIfAkm+dgPzoT7snaTgMnidEA2tixkX5mEHgLjwYygZwsdylWuK cbEHsbVbpMVq+KTpbkyz3HBzmQNysSjM3mAnTe6gE9n2IMobr3YfPpfTNeis4iqHNLPR UirMKj5Va74VeMmA9mJ4S+TEGfQfQ1yqlEA36DI2hExuR02kohbcNXFPr38lmmXJv5Sm 31qA== X-Gm-Message-State: ANhLgQ2lLWMZlGBJvhfJs09BQE9mloSlOW7mXl8VidIqXEgu4U4KdKVE xx79OPk3jWmAHZRK54/yU+jLv5SEeBU= X-Google-Smtp-Source: ADFU+vt4lZrAygC4LS8Fq0zS/l/3Scsc+fN7a/p63q/Ptkr3O+i7vKKSjeQa4961xbhRBOMvIACKzquD7Z4= X-Received: by 2002:a5d:4683:: with SMTP id u3mr4193014wrq.248.1585152861512; Wed, 25 Mar 2020 09:14:21 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:38 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-28-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 27/38] kmsan: hooks for copy_to_user() and friends From: glider@google.com To: Alexander Viro , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, 
iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Memory that is copied from userspace must be unpoisoned. Before copying memory to userspace, check it and report an error if it contains uninitialized bits. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Alexander Viro Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- v3: - fixed compilation errors reported by kbuild test bot v4: - minor variable fixes as requested by Andrey Konovalov - simplified code around copy_to_user() hooks v5: - fixed use of uninitialized value spotted by kbuild test robot Change-Id: I38428b9c7d1909b8441dcec1749b080494a7af99 --- arch/x86/include/asm/uaccess.h | 10 ++++++++++ include/asm-generic/cacheflush.h | 7 ++++++- include/asm-generic/uaccess.h | 12 +++++++++-- include/linux/uaccess.h | 34 ++++++++++++++++++++++++++------ lib/iov_iter.c | 14 +++++++++---- lib/usercopy.c | 8 ++++++-- 6 files changed, 70 insertions(+), 15 deletions(-) diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h index 61d93f062a36e..bfb55fdba5df4 100644 --- a/arch/x86/include/asm/uaccess.h +++ b/arch/x86/include/asm/uaccess.h @@ -6,6 +6,7 @@ */ #include #include +#include #include #include #include @@ -174,6 +175,7 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL)) ASM_CALL_CONSTRAINT \ : "0" (ptr), "i" (sizeof(*(ptr)))); \ (x) = (__force __typeof__(*(ptr))) __val_gu; \ + kmsan_unpoison_shadow(&(x), sizeof(*(ptr))); \ __builtin_expect(__ret_gu, 0); \ }) @@ -248,6 +250,7 @@ extern void __put_user_8(void); __chk_user_ptr(ptr); \ might_fault(); \ __pu_val = x; \ + kmsan_check_memory(&(__pu_val), sizeof(*(ptr))); \ switch (sizeof(*(ptr))) { \ case 1: \ __put_user_x(1, __pu_val, ptr, __ret_pu); \ @@ -270,7 +273,9 @@ extern void __put_user_8(void); #define __put_user_size(x, ptr, size, label) \ do { \ + __typeof__(*(ptr)) __pus_val = x; \ __chk_user_ptr(ptr); \ + kmsan_check_memory(&(__pus_val), size); \ switch (size) { \ case 1: \ __put_user_goto(x, ptr, "b", "b", "iq", label); \ @@ -295,7 +300,9 @@ do { \ */ #define __put_user_size_ex(x, ptr, size) \ do { \ + __typeof__(*(ptr)) __puse_val = x; \ __chk_user_ptr(ptr); \ + kmsan_check_memory(&(__puse_val), size); \ switch (size) { \ case 1: \ __put_user_asm_ex(x, ptr, "b", "b", "iq"); \ @@ -363,6 +370,7 @@ do { \ default: \ (x) = __get_user_bad(); \ } \ + kmsan_unpoison_shadow(&(x), size); \ } while (0) #define __get_user_asm(x, addr, err, itype, rtype, ltype, errret) \ @@ -413,6 +421,7 @@ do { \ default: \ (x) = __get_user_bad(); \ } \ + kmsan_unpoison_shadow(&(x), size); \ } while (0) #define __get_user_asm_ex(x, addr, itype, rtype, ltype) \ @@ -433,6 +442,7 @@ do { \ __typeof__(ptr) __pu_ptr = (ptr); \ __typeof__(size) __pu_size = (size); \ __uaccess_begin(); \ + kmsan_check_memory(&(__pu_val), size); \ __put_user_size(__pu_val, __pu_ptr, __pu_size, __pu_label); \ __pu_err = 0; \ __pu_label: \ diff --git 
a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h index cac7404b2bdd2..7023c44457ef9 100644 --- a/include/asm-generic/cacheflush.h +++ b/include/asm-generic/cacheflush.h @@ -4,6 +4,7 @@ /* Keep includes the same across arches. */ #include +#include #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0 @@ -99,6 +100,7 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end) #ifndef copy_to_user_page #define copy_to_user_page(vma, page, vaddr, dst, src, len) \ do { \ + kmsan_check_memory(src, len); \ memcpy(dst, src, len); \ flush_icache_user_range(vma, page, vaddr, len); \ } while (0) @@ -106,7 +108,10 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end) #ifndef copy_from_user_page #define copy_from_user_page(vma, page, vaddr, dst, src, len) \ - memcpy(dst, src, len) + do { \ + memcpy(dst, src, len); \ + kmsan_unpoison_shadow(dst, len); \ + } while (0) #endif #endif /* __ASM_CACHEFLUSH_H */ diff --git a/include/asm-generic/uaccess.h b/include/asm-generic/uaccess.h index e935318804f8a..88b626c3ef2de 100644 --- a/include/asm-generic/uaccess.h +++ b/include/asm-generic/uaccess.h @@ -142,7 +142,11 @@ static inline int __access_ok(unsigned long addr, unsigned long size) static inline int __put_user_fn(size_t size, void __user *ptr, void *x) { - return unlikely(raw_copy_to_user(ptr, x, size)) ? -EFAULT : 0; + int n; + + n = raw_copy_to_user(ptr, x, size); + kmsan_copy_to_user(ptr, x, size, n); + return unlikely(n) ? -EFAULT : 0; } #define __put_user_fn(sz, u, k) __put_user_fn(sz, u, k) @@ -203,7 +207,11 @@ extern int __put_user_bad(void) __attribute__((noreturn)); #ifndef __get_user_fn static inline int __get_user_fn(size_t size, const void __user *ptr, void *x) { - return unlikely(raw_copy_from_user(x, ptr, size)) ? -EFAULT : 0; + int res; + + res = raw_copy_from_user(x, ptr, size); + kmsan_unpoison_shadow(x, size - res); + return unlikely(res) ? 
-EFAULT : 0; } #define __get_user_fn(sz, u, k) __get_user_fn(sz, u, k) diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h index 8a215c5c1aed8..b38bdeb135dde 100644 --- a/include/linux/uaccess.h +++ b/include/linux/uaccess.h @@ -5,6 +5,7 @@ #include #include #include +#include #define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS) @@ -58,18 +59,26 @@ static __always_inline __must_check unsigned long __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n) { + unsigned long res; + instrument_copy_from_user(to, from, n); check_object_size(to, n, false); - return raw_copy_from_user(to, from, n); + res = raw_copy_from_user(to, from, n); + kmsan_unpoison_shadow(to, n - res); + return res; } static __always_inline __must_check unsigned long __copy_from_user(void *to, const void __user *from, unsigned long n) { + unsigned long res; + might_fault(); instrument_copy_from_user(to, from, n); check_object_size(to, n, false); - return raw_copy_from_user(to, from, n); + res = raw_copy_from_user(to, from, n); + kmsan_unpoison_shadow(to, n - res); + return res; } /** @@ -88,18 +97,26 @@ __copy_from_user(void *to, const void __user *from, unsigned long n) static __always_inline __must_check unsigned long __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n) { + unsigned long res; + instrument_copy_to_user(to, from, n); check_object_size(from, n, true); - return raw_copy_to_user(to, from, n); + res = raw_copy_to_user(to, from, n); + kmsan_copy_to_user((const void *)to, from, n, res); + return res; } static __always_inline __must_check unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n) { + unsigned long res; + might_fault(); instrument_copy_to_user(to, from, n); check_object_size(from, n, true); - return raw_copy_to_user(to, from, n); + res = raw_copy_to_user(to, from, n); + kmsan_copy_to_user((const void *)to, from, n, res); + return res; } #ifdef INLINE_COPY_FROM_USER @@ -107,10 +124,12 @@ static inline __must_check unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n) { unsigned long res = n; + might_fault(); if (likely(access_ok(from, n))) { instrument_copy_from_user(to, from, n); res = raw_copy_from_user(to, from, n); + kmsan_unpoison_shadow(to, n - res); } if (unlikely(res)) memset(to + (n - res), 0, res); @@ -125,12 +144,15 @@ _copy_from_user(void *, const void __user *, unsigned long); static inline __must_check unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n) { + unsigned long res = n; + might_fault(); if (access_ok(to, n)) { instrument_copy_to_user(to, from, n); - n = raw_copy_to_user(to, from, n); + res = raw_copy_to_user(to, from, n); + kmsan_copy_to_user(to, from, n, res); } - return n; + return res; } #else extern __must_check unsigned long diff --git a/lib/iov_iter.c b/lib/iov_iter.c index bf538c2bec777..179c28455693d 100644 --- a/lib/iov_iter.c +++ b/lib/iov_iter.c @@ -138,20 +138,26 @@ static int copyout(void __user *to, const void *from, size_t n) { + int res; + if (access_ok(to, n)) { instrument_copy_to_user(to, from, n); - n = raw_copy_to_user(to, from, n); + res = raw_copy_to_user(to, from, n); + kmsan_copy_to_user(to, from, n, res); } - return n; + return res; } static int copyin(void *to, const void __user *from, size_t n) { + size_t res; + if (access_ok(from, n)) { instrument_copy_from_user(to, from, n); - n = raw_copy_from_user(to, from, n); + res = raw_copy_from_user(to, from, n); + kmsan_unpoison_shadow(to, n - res); } - return n; + return 
res; } static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes, diff --git a/lib/usercopy.c b/lib/usercopy.c index 4bb1c5e7a3eb0..bf313548c4039 100644 --- a/lib/usercopy.c +++ b/lib/usercopy.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 #include #include +#include #include /* out-of-line parts */ @@ -13,6 +14,7 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n if (likely(access_ok(from, n))) { instrument_copy_from_user(to, from, n); res = raw_copy_from_user(to, from, n); + kmsan_unpoison_shadow(to, n - res); } if (unlikely(res)) memset(to + (n - res), 0, res); @@ -24,12 +26,14 @@ EXPORT_SYMBOL(_copy_from_user); #ifndef INLINE_COPY_TO_USER unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n) { + unsigned long res; might_fault(); if (likely(access_ok(to, n))) { instrument_copy_to_user(to, from, n); - n = raw_copy_to_user(to, from, n); + res = raw_copy_to_user(to, from, n); + kmsan_copy_to_user(to, from, n, res); } - return n; + return res; } EXPORT_SYMBOL(_copy_to_user); #endif From patchwork Wed Mar 25 16:12:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458287 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D68D76CA for ; Wed, 25 Mar 2020 16:14:43 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 99E5F20409 for ; Wed, 25 Mar 2020 16:14:43 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="uFOU7nF0" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 99E5F20409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id E33706B00A2; Wed, 25 Mar 2020 12:14:26 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id DBFAA6B00A4; Wed, 25 Mar 2020 12:14:26 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id C36926B00A5; Wed, 25 Mar 2020 12:14:26 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0180.hostedemail.com [216.40.44.180]) by kanga.kvack.org (Postfix) with ESMTP id 981506B00A2 for ; Wed, 25 Mar 2020 12:14:26 -0400 (EDT) Received: from smtpin17.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 726811801F2CF for ; Wed, 25 Mar 2020 16:14:26 +0000 (UTC) X-FDA: 76634382132.17.cat10_7e0dd3bed231c X-Spam-Summary: 
2,0,0,115ccab85b4b0435,d41d8cd98f00b204,3yin7xgykce8x2zuv8x55x2v.t532z4be-331crt1.58x@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1541:1593:1594:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3152:3352:3865:3867:3868:5007:6261:6653:6742:6743:9969:10004:10400:11026:11232:11658:11914:12043:12048:12296:12297:12438:12555:12895:13069:13311:13357:13846:14096:14097:14181:14394:14659:14721:21080:21212:21365:21444:21451:21627:30054:30064,0,RBL:209.85.221.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:23,LUA_SUMMARY:none X-HE-Tag: cat10_7e0dd3bed231c X-Filterd-Recvd-Size: 5008 Received: from mail-wr1-f73.google.com (mail-wr1-f73.google.com [209.85.221.73]) by imf12.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:26 +0000 (UTC) Received: by mail-wr1-f73.google.com with SMTP id v14so1367202wrq.13 for ; Wed, 25 Mar 2020 09:14:25 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=gyq7hwN+bisRArIjD7Jqw5eHSbpVolCsZs6kktVM9HU=; b=uFOU7nF0gUAz/Oy62ihGODUkDhlkGwot5Cd5M65DG6hGMt+rVq0TGgEW+XJsmRP5KT wV9k3Qn8gt2Zahyidlnk9L3iI1BE+61P0BWlidR0+tfKHQEqJJOs4InrfZNvjSl88o0N 37uTduGFYmz2n61cDL99062tECcJj3gtZWYC8I73RS/J+w30oE4eKe33umcRl3I5pQyZ Me5rTodffpR3ByfOYB8C7O1ERhppkWhZyBcN3icpAoYHwbbGG4s01obW81D8/FwRJ5WK xd48tSQ9GpXB1KCWqzSRLXJlBOTF8kPXBZV/j4TQHfYeb6fUpDh+09B6nB0r/lapNvfR ew7w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=gyq7hwN+bisRArIjD7Jqw5eHSbpVolCsZs6kktVM9HU=; b=CLgnPD3IwktswVa9NdZ62Co72BjT3aCJLaZkT6CUGP32auPRMkYxs7lr9sKdu8yKo8 wCqSm8Ksj6DzHNeqW9zVlOss0fydxDpDo5jyK5iKB8g3eAEUOU7ZDZoXAP6BIYpIg6BN tcSJaEZScoXsihviXxqLQfpS6YE0vaHVQUt8TZll4eKBfHztTey4mcnb77/XUW7hr8mG csdpJ9bWLRwfEg4oVWW+r7tkyHczHDNVEE+dxEh1cH2y2Oqzng4GGevpy5Frz8WiZ7Os YQABj4Vc8BhQDCrvVWpkU2nW5+YntTNYZ5erGluJD8AiTsck1/COM/c/aYPVKuxTkrOz 667Q== X-Gm-Message-State: ANhLgQ3TpTirtSNOrs9RebCy31VSiNM4Xeg0feC5mRetopeCUpmB1nkd yN+A3gAJZV/xv1bLlM6zlI9RpwNXgSk= X-Google-Smtp-Source: ADFU+vs0XM59oG2kZqEzxFrE548WfVgevMAAgSa86uIo0PXQ2wuZ3TP7TmdvlcyJQT5QV0byWMHSbehogio= X-Received: by 2002:adf:edcf:: with SMTP id v15mr4132316wro.309.1585152864527; Wed, 25 Mar 2020 09:14:24 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:39 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-29-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 28/38] kmsan: init: call KMSAN initialization routines From: glider@google.com To: Jens Axboe , Andy Lutomirski , Vegard Nossum , Dmitry Vyukov , Andrey Konovalov , Marco Elver , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, 
jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: kmsan_initialize_shadow() creates metadata pages for mappings created at boot time. kmsan_initialize() initializes the bookkeeping for init_task and enables KMSAN. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Jens Axboe Cc: Andy Lutomirski Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Andrey Konovalov Cc: Marco Elver Cc: linux-mm@kvack.org Reviewed-by: Andrey Konovalov --- Change-Id: Ie3af251d629b911668f8651d868c544f3c11209f --- init/main.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/init/main.c b/init/main.c index 345a9ab4450f1..4dd15063d32fe 100644 --- a/init/main.c +++ b/init/main.c @@ -33,6 +33,7 @@ #include #include #include +#include #include #include #include @@ -772,6 +773,7 @@ static void __init mm_init(void) page_ext_init_flatmem(); init_debug_pagealloc(); report_meminit(); + kmsan_initialize_shadow(); mem_init(); kmem_cache_init(); kmemleak_init(); @@ -847,6 +849,7 @@ asmlinkage __visible void __init start_kernel(void) sort_main_extable(); trap_init(); mm_init(); + kmsan_initialize(); ftrace_init(); From patchwork Wed Mar 25 16:12:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458289 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D671E6CA for ; Wed, 25 Mar 2020 16:14:46 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 9A67B20409 for ; Wed, 25 Mar 2020 16:14:46 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="g5mHdfJr" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 9A67B20409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 1B0336B00A4; Wed, 25 Mar 2020 12:14:30 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 114AA6B00A6; Wed, 25 Mar 2020 12:14:30 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id EF7C26B00A7; Wed, 25 Mar 2020 12:14:29 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0229.hostedemail.com [216.40.44.229]) by kanga.kvack.org (Postfix) with ESMTP id CFC046B00A4 for ; Wed, 25 Mar 2020 12:14:29 -0400 (EDT) Received: from smtpin11.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id AC72A8248047 for ; Wed, 25 Mar 2020 16:14:29 +0000 (UTC) X-FDA: 76634382258.11.place50_7e7f843e8c637 X-Spam-Summary: 
2,0,0,fb1086efd5be15f2,d41d8cd98f00b204,3y4n7xgykcfi052xyb08805y.w86527eh-664fuw4.8b0@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:617:800:960:973:982:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1542:1593:1594:1711:1730:1747:1777:1792:2393:2559:2562:2897:3138:3139:3140:3141:3142:3152:3353:3865:3866:3867:3872:4117:4250:4321:4605:5007:6261:6653:6742:6743:7875:7903:8660:9969:10004:10400:11026:11232:11473:11658:11914:12043:12048:12296:12297:12438:12555:12895:13148:13230:13846:14093:14097:14181:14394:14659:14721:21080:21365:21444:21451:21611:21627:30054:30064,0,RBL:209.85.221.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: place50_7e7f843e8c637 X-Filterd-Recvd-Size: 6400 Received: from mail-wr1-f73.google.com (mail-wr1-f73.google.com [209.85.221.73]) by imf36.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:29 +0000 (UTC) Received: by mail-wr1-f73.google.com with SMTP id f8so1379649wrp.1 for ; Wed, 25 Mar 2020 09:14:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=bBG9lgwUrPmgqsOzzaW4vKvaQj094UYKenoIEuCe488=; b=g5mHdfJr1Grg2tNVALATfglEcximeLr1JkhMXGCrX1jrxFvYMFD0BRR4BBFu71TviS uwjAyYjGra03P9Ch+DgtAGn/+BkqQlV7/hSBCgAoBZasZHb6KCZVdbIzWp7148yTwuPD ufuiF4IMBql1B9cfESx80zq567Lzq22nw7axYsZlfwJ0U1GeAuwIWpRM/m9JRAL4/Fku xyufYUDsV2J6ouLMPen5UZj1VV/Sqvecx5Tzj0CNGmCdT0r6Xj/4qkPBSGdVH4zqXEmQ R3W6a7qdFUQgbsLmlH40MHQVRSoD3ZsoC4XJM+WiUmP850BVLJ+AskWgMJh54uTKFhf9 AU+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=bBG9lgwUrPmgqsOzzaW4vKvaQj094UYKenoIEuCe488=; b=XQrnerzIhUctYGxkJiSl9d8EYU7aU0RnpjhKf0HCBJB2alyvgM37gWVpvIQULGRchp QJTUf1qbQG4CEAb4RPrEH9xxPCjd90jSn3iuRSktLQAVbvN+n6FJfG/c/MdSBmO/ckdh aUJXECw/+qcVNd27/+7UniXLbsSqL1RcaDfKQd0Zj6o304rXMX2odi4iuxvwr3oJcdPq EhEV3l6ukiguPTDPdh3GNu8ZEgTu1Oa1i2Ohi84KC6oJWApSu25fW/KRx9KxfJukA//Y vOPfXoTIkAL9RMScJuJGeDFEQstjNpUyQor48X1tymZlkhfwNVe2KMEy1/8D/YemiCQs 04rA== X-Gm-Message-State: ANhLgQ2Teqb76zqUr8UGkUPLb/DDCqx2PqYzQTqc8lpW//w0DVH0bOM1 s/TCAXeK9eJzfRR6lSuu+iG7tj4Zr1w= X-Google-Smtp-Source: ADFU+vuWjL38s5D+zWA8MILVsH3kZXbWQhF0iYpB5OzGWfDlMpmHN4g3dqRminre+Y5sLrElUyRMYBgeIek= X-Received: by 2002:adf:82ab:: with SMTP id 40mr3976785wrc.323.1585152867832; Wed, 25 Mar 2020 09:14:27 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:40 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-30-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 29/38] kmsan: enable KMSAN builds From: glider@google.com To: Jens Axboe , Andy Lutomirski , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, 
iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Make KMSAN usable by adding the necessary Makefile bits. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Jens Axboe Cc: Andy Lutomirski Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- This patch was previously called "kmsan: Changing existing files to enable KMSAN builds". Logically unrelated parts of it were split away. v4: - split away changes to init/main.c as requested by Andrey Konovalov Change-Id: I37e0b7f2d2f2b0aeac5753ff9d6b411485fc374e --- Makefile | 3 ++- mm/Makefile | 1 + scripts/Makefile.lib | 6 ++++++ 3 files changed, 9 insertions(+), 1 deletion(-) diff --git a/Makefile b/Makefile index a532333b4cd02..da315f20618b3 100644 --- a/Makefile +++ b/Makefile @@ -482,7 +482,7 @@ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE -export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN CFLAGS_KCSAN +export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN CFLAGS_KCSAN CFLAGS_KMSAN export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL @@ -901,6 +901,7 @@ KBUILD_CFLAGS += $(call cc-option,-fcf-protection=none) endif include scripts/Makefile.kasan +include scripts/Makefile.kmsan include scripts/Makefile.extrawarn include scripts/Makefile.ubsan include scripts/Makefile.kcsan diff --git a/mm/Makefile b/mm/Makefile index fa91e963c2f9e..7b9bce9cc0afb 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -81,6 +81,7 @@ obj-$(CONFIG_PAGE_POISONING) += page_poison.o obj-$(CONFIG_SLAB) += slab.o obj-$(CONFIG_SLUB) += slub.o obj-$(CONFIG_KASAN) += kasan/ +obj-$(CONFIG_KMSAN) += kmsan/ obj-$(CONFIG_FAILSLAB) += failslab.o obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o obj-$(CONFIG_MEMTEST) += memtest.o diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib index b12dd5ba48960..e9a8c2671a4b3 100644 --- a/scripts/Makefile.lib +++ b/scripts/Makefile.lib @@ -137,6 +137,12 @@ _c_flags += $(if $(patsubst n%,, \ $(CFLAGS_KASAN), $(CFLAGS_KASAN_NOSANITIZE)) endif +ifeq ($(CONFIG_KMSAN),y) +_c_flags += $(if $(patsubst n%,, \ + $(KMSAN_SANITIZE_$(basetarget).o)$(KMSAN_SANITIZE)y), \ + $(CFLAGS_KMSAN)) +endif + ifeq ($(CONFIG_UBSAN),y) _c_flags += $(if $(patsubst n%,, \ $(UBSAN_SANITIZE_$(basetarget).o)$(UBSAN_SANITIZE)$(CONFIG_UBSAN_SANITIZE_ALL)), \ From patchwork Wed Mar 25 16:12:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458291 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 28EB0913 for ; Wed, 25 Mar 2020 16:14:50 +0000 (UTC) Received: from 
kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id EA4C020409 for ; Wed, 25 Mar 2020 16:14:49 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="cAfnE1Vw" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org EA4C020409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 733536B00A6; Wed, 25 Mar 2020 12:14:33 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 6BABF6B00A8; Wed, 25 Mar 2020 12:14:33 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 559926B00A9; Wed, 25 Mar 2020 12:14:33 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0180.hostedemail.com [216.40.44.180]) by kanga.kvack.org (Postfix) with ESMTP id 338076B00A6 for ; Wed, 25 Mar 2020 12:14:33 -0400 (EDT) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 2142B181C34EF for ; Wed, 25 Mar 2020 16:14:33 +0000 (UTC) X-FDA: 76634382426.01.part71_7f04677504043 X-Spam-Summary: 2,0,0,5cac5703c7b4ec93,d41d8cd98f00b204,3z4n7xgykcfy49612f4cc492.0ca96bil-aa8jy08.cf4@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1542:1593:1594:1711:1730:1747:1777:1792:2393:2553:2559:2562:2904:3138:3139:3140:3141:3142:3152:3352:3865:3866:3867:3870:3872:3874:4321:5007:6261:6653:6737:6742:6743:7514:9969:10004:10400:11026:11232:11658:11914:12043:12048:12296:12297:12438:12555:12895:12986:13161:13229:13846:14093:14097:14181:14394:14659:14721:21060:21080:21365:21444:21451:21627:21990:30034:30045:30054:30064:30090,0,RBL:209.85.128.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: part71_7f04677504043 X-Filterd-Recvd-Size: 5865 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) by imf11.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:32 +0000 (UTC) Received: by mail-wm1-f74.google.com with SMTP id m4so2668417wme.0 for ; Wed, 25 Mar 2020 09:14:32 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=a+HHqC0dc7Q+M3lI9Kv2dExasURR+VQ+5gFn0GtjCWE=; b=cAfnE1Vw95Kp8qMfCrgctDsXy9lALLxqUKdaO5dg621kfxeZIeQOy/e/oHNgln3Yto BQ/NF6SjCovsR9RMfAUyijxqRvvkQwVt5PiGFARUSGo0y24zBpgJjd/HKY95mhHlnm7E NyOn8PHghpYSpgc51B6MGBbf5niPFP6HHQYeAE0sp4tz9zkNUZDvpA9IVdcD8rp+2MEw b5LKuE+3eEwtXJSrCVT8sKjCdzCZbYLAddpbifjUHOKywfx7WgU+ddGJ0XAMFnS8mMsE mmn6TH2DyZLhBJkgguwmcTBWwO57d4kpl/t/cKAE8q9BCljqN2NeyNXtOinv2FnqF2iD h4mw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=a+HHqC0dc7Q+M3lI9Kv2dExasURR+VQ+5gFn0GtjCWE=; b=q56nJ8OTedkvbPbboOZOEZ6pz2SVGtHf5xdlUoR7tQOWJewLvK6Op5DBKwWfuAgQDQ 
MJP+jXXVCiFw+/XLZXL//hBKViKNA0bTd3Rfz5Gdesed7ozfiw4qs8rclUMnidcRtlz5 xQnOdIbmqTvc/rQ+n1BQ5VXNKGeVcJoCYh7qQ//aVOiFgzy0A7crAir65dWThXHnelAT rub7BWLkJSYfl3cT33Ncr6b6zdg25XbTbiL2/yDBg9tqYd0IExp/y6ZiElSVWqf5nBK3 AtcZVNMZFkn1RidzwmHIJ1OUTvl121e0JvnhRwvqJSxht0IqgiPRvqAWVprayu8bATcI kGSQ== X-Gm-Message-State: ANhLgQ3cbiGMatrg2H7fIYXx8DvjfzBbDz+i/UGDc7alGjTmY4YUMKbW 4Hp7VZ7ravLKSAhB+Zyx//5LCUrPU7I= X-Google-Smtp-Source: ADFU+vs9PPJ+u32WNl22gimU/x02Zgiy0Z9QDgDzx7/LlYQ88GNvA93T6s+4y6YFpZu/q0/bL/DG9SEdlBA= X-Received: by 2002:a5d:540c:: with SMTP id g12mr4456924wrv.178.1585152871215; Wed, 25 Mar 2020 09:14:31 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:41 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-31-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 30/38] kmsan: handle /dev/[u]random From: glider@google.com To: Andrew Morton , Jens Axboe , "Theodore Ts'o" , Dmitry Torokhov , "Martin K. Petersen" , "Michael S. Tsirkin" , Christoph Hellwig , Eric Dumazet , Eric Van Hensbergen , Takashi Iwai , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , Matthew Wilcox , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, darrick.wong@oracle.com, davem@davemloft.net, ebiggers@google.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com, schwidefsky@de.ibm.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: The random number generator may use uninitialized memory, but it may not return uninitialized values. Unpoison the output buffer in _extract_crng() to prevent false reports. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Andrew Morton Cc: Jens Axboe Cc: "Theodore Ts'o" Cc: Dmitry Torokhov Cc: Martin K. Petersen Cc: "Michael S. Tsirkin" Cc: Christoph Hellwig Cc: Eric Dumazet Cc: Eric Van Hensbergen Cc: Takashi Iwai Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: Matthew Wilcox Cc: linux-mm@kvack.org Reviewed-by: Andrey Konovalov --- This patch was previously known as "kmsan: unpoisoning buffers from devices etc.", but it turned out to be possible to drop most of the annotations from that patch, so it only relates to /dev/random now. Change-Id: Id460e7a86ce564f1357469f53d0c7410ca08f0e9 --- drivers/char/random.c | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/drivers/char/random.c b/drivers/char/random.c index 0d10e31fd342f..7cd36c726b045 100644 --- a/drivers/char/random.c +++ b/drivers/char/random.c @@ -322,6 +322,7 @@ #include #include #include +#include #include #include #include @@ -1007,6 +1008,11 @@ static void _extract_crng(struct crng_state *crng, spin_lock_irqsave(&crng->lock, flags); if (arch_get_random_long(&v)) crng->state[14] ^= v; + /* + * Regardless of where the random data comes from, KMSAN should treat + * it as initialized. 
+ */ + kmsan_unpoison_shadow(crng->state, sizeof(crng->state)); chacha20_block(&crng->state[0], out); if (crng->state[12] == 0) crng->state[13]++; From patchwork Wed Mar 25 16:12:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458293 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 025716CA for ; Wed, 25 Mar 2020 16:14:54 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id C503820409 for ; Wed, 25 Mar 2020 16:14:53 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Ap6VLbh1" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org C503820409 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 694F86B00A8; Wed, 25 Mar 2020 12:14:36 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 61DDF6B00AA; Wed, 25 Mar 2020 12:14:36 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 4743E6B00AB; Wed, 25 Mar 2020 12:14:36 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0134.hostedemail.com [216.40.44.134]) by kanga.kvack.org (Postfix) with ESMTP id 24D996B00A8 for ; Wed, 25 Mar 2020 12:14:36 -0400 (EDT) Received: from smtpin30.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay02.hostedemail.com (Postfix) with ESMTP id 01FB95676D for ; Wed, 25 Mar 2020 16:14:36 +0000 (UTC) X-FDA: 76634382552.30.brass10_7f6d17edb5942 X-Spam-Summary: 2,0,0,0ce7b8571e659bb1,d41d8cd98f00b204,3aon7xgykcfk7c945i7ff7c5.3fdc9elo-ddbm13b.fi7@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1535:1541:1593:1594:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3152:3352:3865:3867:3870:3871:3872:3874:4250:4321:5007:6261:6653:6742:6743:7903:8660:9969:10004:10400:11026:11232:11658:11914:12043:12048:12296:12297:12438:12555:12895:12986:13069:13148:13230:13255:13311:13357:13846:14096:14097:14181:14394:14659:14721:21080:21365:21444:21451:21627:21990:30054:30064:30070,0,RBL:209.85.160.201:@flex--glider.bounces.google.com:.lbl8.mailshell.net-62.18.0.100 66.100.201.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:24,LUA_SUMMARY:none X-HE-Tag: brass10_7f6d17edb5942 X-Filterd-Recvd-Size: 5305 Received: from mail-qt1-f201.google.com (mail-qt1-f201.google.com [209.85.160.201]) by imf13.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:35 +0000 (UTC) Received: by mail-qt1-f201.google.com with SMTP id k46so2365726qta.2 for ; Wed, 25 Mar 2020 09:14:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=J+h0iEnWTkRG2NLExxLPCnEChpv3/sAmomh7lQSL40I=; b=Ap6VLbh1nr/g7HPsN85bH+He50TVNZKjm8mbQ67XUjm7N5GYD3YJI4IkP40aduyrl+ 
xwjd5PLEboIdfAlynCr2s0bHLCAb9/MFGyAJLO6Dx1INC68alDj08z4Eg3GDiKEXwQjT vqOs8uri5xwH/VytQTSzdcqNDMdRds3xw6cyeR8CaauCTy8PNm9Fx2kEC5FQxByGHyuN v/HM33BPhxYyZqLfWySi1CjPLB2VsIQ9DMPxrDswCNO18NWJNDOc9TUF9A5LE2PV45vL 5z8E7pv8OjWLy8BuiU/sWK+u1/HqEAuJ9oAjEvfqVe3lhbrh6KMOQwPlG5Qc4uV0pjI4 gYYg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=J+h0iEnWTkRG2NLExxLPCnEChpv3/sAmomh7lQSL40I=; b=C9VtnVK30dSC1XBM3A/zkjLekdwBK3LtJi57Gi9YNZCgE6/XuQhNm9v44SSTc44s8P bpyEYg9MRk80GCRFbVIfXOZ807DHvduw+NyQCUG1/ntk8Guk551Qk2JKZKH8OmBB2BpN wlXlW75IAkYXjWEcO0wfVo7QchrwGfw5wWe+pDiTm6qQiT9CVgZqy6UdA4P7f/mbMZTW VY0RL6JVpcl9maN9JYGs6bpCQrRXcXm5+F0wQl4wlQeo4V4Bw+LNQL9VxBdl+YaSWtGL iTKJDdyORvl6VVwnhxsToxUZBA4Z/KpOFscwXBf/RkZTm6nXFvt+VFrRCKQYO9ccHu3e 8deg== X-Gm-Message-State: ANhLgQ0SC76Rbf3exF2qYWK0U8QvyZZuDMLH0imSlceb4hI40WSP6VPL i2bjOdylfkGm/3S/0mBtghc88byPoPk= X-Google-Smtp-Source: ADFU+vtc3ssCRbxQQhTVV5kqjyH/E8hfiLbgFgdTIhyuHgKj4w3GvV01MQ0MpCGx8nekkibvHTvg7Uf104c= X-Received: by 2002:a05:6214:186c:: with SMTP id eh12mr3770740qvb.131.1585152874479; Wed, 25 Mar 2020 09:14:34 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:42 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-32-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 31/38] kmsan: virtio: check/unpoison scatterlist in vring_map_one_sg() From: glider@google.com To: Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , "Michael S. Tsirkin" , Jason Wang , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: If vring doesn't use the DMA API, KMSAN is unable to tell whether the memory is initialized by hardware. Explicitly call kmsan_handle_dma() from vring_map_one_sg() in this case to prevent false positives. Signed-off-by: Alexander Potapenko Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: "Michael S. 
Tsirkin" Cc: Jason Wang Cc: linux-mm@kvack.org --- Change-Id: Icc8678289b7084139320fc503898a67aa9803458 --- drivers/virtio/virtio_ring.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index 58b96baa8d488..8b6dee1dfde58 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -11,6 +11,7 @@ #include #include #include +#include #include #ifdef DEBUG @@ -326,8 +327,15 @@ static dma_addr_t vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist *sg, enum dma_data_direction direction) { - if (!vq->use_dma_api) + if (!vq->use_dma_api) { + /* + * If DMA is not used, KMSAN doesn't know that the scatterlist + * is initialized by the hardware. Explicitly check/unpoison it + * depending on the direction. + */ + kmsan_handle_dma(sg_virt(sg), sg->length, direction); return (dma_addr_t)sg_phys(sg); + } /* * We can't use dma_map_sg, because we don't use scatterlists in From patchwork Wed Mar 25 16:12:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458295 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 460D76CA for ; Wed, 25 Mar 2020 16:14:57 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 12F842077D for ; Wed, 25 Mar 2020 16:14:57 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="HcF3bPHj" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 12F842077D Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 954F26B00AA; Wed, 25 Mar 2020 12:14:39 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 88F196B00AC; Wed, 25 Mar 2020 12:14:39 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7574B6B00AD; Wed, 25 Mar 2020 12:14:39 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0187.hostedemail.com [216.40.44.187]) by kanga.kvack.org (Postfix) with ESMTP id 527A66B00AA for ; Wed, 25 Mar 2020 12:14:39 -0400 (EDT) Received: from smtpin01.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay03.hostedemail.com (Postfix) with ESMTP id 36BC1824805A for ; Wed, 25 Mar 2020 16:14:39 +0000 (UTC) X-FDA: 76634382678.01.thing48_7fe2f73b81612 X-Spam-Summary: 
2,0,0,d1debd5fe9116e90,d41d8cd98f00b204,3byn7xgykcfwafc78laiiaf8.6igfchor-ggep46e.ila@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1534:1541:1593:1594:1711:1730:1747:1777:1792:2393:2553:2559:2562:3138:3139:3140:3141:3142:3152:3352:3865:3867:3868:3870:4250:4321:5007:6261:6653:6742:6743:7875:7903:9969:10004:10400:11026:11658:11914:12043:12048:12295:12296:12297:12438:12555:12895:12986:13069:13311:13357:13846:14096:14097:14181:14394:14659:14721:21080:21365:21444:21451:21611:21627:30054:30064:30090,0,RBL:209.85.128.74:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:neutral,Custom_rules:0:0:0,LFtime:33,LUA_SUMMARY:none X-HE-Tag: thing48_7fe2f73b81612 X-Filterd-Recvd-Size: 4763 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) by imf02.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:38 +0000 (UTC) Received: by mail-wm1-f74.google.com with SMTP id s15so873648wmc.0 for ; Wed, 25 Mar 2020 09:14:38 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=nJyNHCH3d7fYtWRBOT5bbIl4NMJMAEkhfkpvY6wfoBU=; b=HcF3bPHj7jzjxG9fAHkaIPdFhel6b6msvnw7uic6pLKnoDnvZRY0UTs9JViS+zGiJg oVdeni1+hEzSKIfp5cMnt6G9YsAif/6+YXWYGFnyWMmOzX9F23Vawldjo2jGnhbaw3zq QkpmLmrDyiU7ajfiaxNvSfk8UXIuMdpLam1bE92BRookB9SbFELoUHVAZ1L+Hg0/gD4C QNNk+YRh0hSVfvGDY5J/8DaZb/+XM0Kvy7gdu13fMSPaF74FFVW8yZrCtkUROCg8EB5K saBkvgi+SueT9Yyijqg+rxAkY0p+ULFDlItEwH4/zbZ5iccBj8sD/SoinkCEOtOBlk8T NoCA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=nJyNHCH3d7fYtWRBOT5bbIl4NMJMAEkhfkpvY6wfoBU=; b=fQOTb1DX0zec+YG+aPPLNAO8EW73cxli28jcwrUAP3xKC+g9/isCFxBjetjEVsyd3I UGWO77y6f0h9w913eI2Hx5K91plmgN0rZ2Os7xyFL+64jRXCLto/Hjl6uYRRtjS5n5eh ssW2PmyqqDrTyxCK3rns2EE34mitL13dcPYoHkiqEHWtscpRofPojjPOLeUXCANt47qW aknKuMZLBBIfl7k+akoAH8NlbxMGKAKF4bIEkfcOJZVlIzYINq7KgqRdtSL/i6nwu9EK 7EWe3lKwGo5f5MG//BbrogEBxp3P5pYgBTa5NqWrk/1vBhy/01dzDNjW/vHQiqH38kP4 ZALQ== X-Gm-Message-State: ANhLgQ2yLn9mXgv0vP0j1MBUttsI5FGRqT5FdE5i7uLOlNVcEa8XUUC4 7U/LXM27z9kG0tHh0182dgKlR2lccE0= X-Google-Smtp-Source: ADFU+vvINOLjtES2nEdJ/29EHkQUc4ebY9GUL2c0x+lDO36gIfkVy5LB5whEru2DqCemqwVrum3kNZbOpts= X-Received: by 2002:adf:afcb:: with SMTP id y11mr4038593wrd.141.1585152877387; Wed, 25 Mar 2020 09:14:37 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:43 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-33-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 32/38] kmsan: disable strscpy() optimization under KMSAN From: glider@google.com To: Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, 
iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Disable the efficient 8-byte reading under KMSAN to avoid false positives. Signed-off-by: Alexander Potapenko To: Alexander Potapenko Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- v4: - actually disable the optimization under KMSAN via max=0 - use IS_ENABLED as requested by Marco Elver Change-Id: I25d1acf5c3df6eff85894cd94f5ddbe93308271c --- lib/string.c | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/lib/string.c b/lib/string.c index 6012c385fb314..fec929e70f1a5 100644 --- a/lib/string.c +++ b/lib/string.c @@ -202,6 +202,14 @@ ssize_t strscpy(char *dest, const char *src, size_t count) max = 0; #endif + /* + * read_word_at_a_time() below may read uninitialized bytes after the + * trailing zero and use them in comparisons. Disable this optimization + * under KMSAN to prevent false positive reports. + */ + if (IS_ENABLED(CONFIG_KMSAN)) + max = 0; + while (max >= sizeof(unsigned long)) { unsigned long c, data; From patchwork Wed Mar 25 16:12:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458297 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id C921F6CA for ; Wed, 25 Mar 2020 16:15:00 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id 7BB75206F8 for ; Wed, 25 Mar 2020 16:15:00 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Xda12kqi" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org 7BB75206F8 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 670416B00AC; Wed, 25 Mar 2020 12:14:42 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 5D2A76B00AE; Wed, 25 Mar 2020 12:14:42 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 471DC6B00AF; Wed, 25 Mar 2020 12:14:42 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0161.hostedemail.com [216.40.44.161]) by kanga.kvack.org (Postfix) with ESMTP id 246156B00AC for ; Wed, 25 Mar 2020 12:14:42 -0400 (EDT) Received: from smtpin06.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay01.hostedemail.com (Postfix) with ESMTP id 0B7D9180AC4ED for ; Wed, 25 Mar 2020 16:14:42 +0000 (UTC) X-FDA: 76634382804.06.note46_804c639b02a4b X-Spam-Summary: 
2,0,0,1dd4b2f57eb3d454,d41d8cd98f00b204,3cin7xgykcf8difabodlldib.9ljifkru-jjhs79h.lod@flex--glider.bounces.google.com,,RULES_HIT:1:2:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1593:1594:1605:1730:1747:1777:1792:2393:2559:2562:2901:2903:3138:3139:3140:3141:3142:3152:3866:3867:3874:4049:4250:5007:6261:6653:6742:6743:7875:7903:9969:10004:11026:11658:11914:12043:12048:12291:12297:12438:12555:12683:12895:13846:14096:14097:14394:14659:21060:21080:21365:21444:21451:21627:21990:30054:30064,0,RBL:209.85.217.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:25,LUA_SUMMARY:none X-HE-Tag: note46_804c639b02a4b X-Filterd-Recvd-Size: 10421 Received: from mail-vs1-f73.google.com (mail-vs1-f73.google.com [209.85.217.73]) by imf18.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:41 +0000 (UTC) Received: by mail-vs1-f73.google.com with SMTP id v10so447471vso.12 for ; Wed, 25 Mar 2020 09:14:41 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=YvwAoG4X3WWEZQ7U1+pxpD7xDTJfH6zyOtSyfbOznuI=; b=Xda12kqiTkKU61XayPhLhCJfPP4aik5cTnB3r4BDF7aef42NSiDya4EP1p2LFU8vOu U/69wPkR5Fkdlt7FwTJh4u4GojivkQFrliunWFUgfO/PLOoXcDF8FSx7LKlhA+1JhlAn kqrNHs31sBKb1dei50n+/MWCcBRfOS/wuKsn7jYfMa4QncNF6gykHEsNfXoi3gr6NJvX vs2qC0FomzfAT9uI7pq9IDwfTuc0lklg8QBPWLCdo/hAuvRj4YK8rrDO8whD5DEzqdpH h1z4DdnhEjrx0vsLBGnNEOFrFmp4pRvvwrVeDZb0rUs/Pp0ItagrgsbRWb6ZziMUePwQ mPSA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=YvwAoG4X3WWEZQ7U1+pxpD7xDTJfH6zyOtSyfbOznuI=; b=ghIJKubjHHKCIFq2j4aqWDzwRbdEtpGDLTPTnzo+22lZ8OyckUT7/RFT1P8s8kg5Ao EFlnk5zD/6BJcncJj34WTrCwCtcRSEPWPahAZdfw/QCB1oatXILgSG1TYDRbmtRwPb/q 9q4NC1K8Z+P++RcynRv9l2/w49JWCOdU+DIKjTA4PvZr9GheIplBNOMfY/MYdAR9I9Gf nSVYKawm05+jltvNEV8jdivPGSFGpPs3saj6ilIwelFFXAIQENkeOK8lvMkLwJeGwqAD d7YEfkCJguyUCArQQjli2j733bQRPDZDnw+Q4JxjTBk1AxR7IoVAIOmMCk7jUkRbMOJx qHeQ== X-Gm-Message-State: ANhLgQ0JNNoAJ0ZeiAObRp4HBlYnVLpapI3UkNn7XhXLbN2od80weFJY kBG/KbuXZyfkIez6MKMNVgoUoFZrxAY= X-Google-Smtp-Source: ADFU+vuruuTu00b2lJlnmW/XBY4+Rn+dkjKIWDgWlHfd598bEbLmnPNctxB22uMhZ9ExRb8vwcrwezIXLpg= X-Received: by 2002:ab0:2553:: with SMTP id l19mr2875866uan.128.1585152880676; Wed, 25 Mar 2020 09:14:40 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:44 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-34-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 33/38] kmsan: add iomap support From: glider@google.com To: Christoph Hellwig , "Darrick J. 
Wong" , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@lst.de, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Functions from lib/iomap.c interact with hardware, so KMSAN must ensure that: - every read function returns an initialized value - every write function checks values before sending them to hardware. Signed-off-by: Alexander Potapenko Cc: Christoph Hellwig Cc: Darrick J. Wong Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org Reviewed-by: Andrey Konovalov --- v4: - adjust sizes of checked memory buffers as requested by Marco Elver Change-Id: Iacd96265e56398d8c111637ddad3cad727e48c8d --- lib/iomap.c | 40 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 40 insertions(+) diff --git a/lib/iomap.c b/lib/iomap.c index e909ab71e995d..3582e8d1ca34e 100644 --- a/lib/iomap.c +++ b/lib/iomap.c @@ -6,6 +6,7 @@ */ #include #include +#include #include @@ -70,26 +71,31 @@ static void bad_io_access(unsigned long port, const char *access) #define mmio_read64be(addr) swab64(readq(addr)) #endif +__no_sanitize_memory unsigned int ioread8(void __iomem *addr) { IO_COND(addr, return inb(port), return readb(addr)); return 0xff; } +__no_sanitize_memory unsigned int ioread16(void __iomem *addr) { IO_COND(addr, return inw(port), return readw(addr)); return 0xffff; } +__no_sanitize_memory unsigned int ioread16be(void __iomem *addr) { IO_COND(addr, return pio_read16be(port), return mmio_read16be(addr)); return 0xffff; } +__no_sanitize_memory unsigned int ioread32(void __iomem *addr) { IO_COND(addr, return inl(port), return readl(addr)); return 0xffffffff; } +__no_sanitize_memory unsigned int ioread32be(void __iomem *addr) { IO_COND(addr, return pio_read32be(port), return mmio_read32be(addr)); @@ -142,18 +148,21 @@ static u64 pio_read64be_hi_lo(unsigned long port) return lo | (hi << 32); } +__no_sanitize_memory u64 ioread64_lo_hi(void __iomem *addr) { IO_COND(addr, return pio_read64_lo_hi(port), return readq(addr)); return 0xffffffffffffffffULL; } +__no_sanitize_memory u64 ioread64_hi_lo(void __iomem *addr) { IO_COND(addr, return pio_read64_hi_lo(port), return readq(addr)); return 0xffffffffffffffffULL; } +__no_sanitize_memory u64 ioread64be_lo_hi(void __iomem *addr) { IO_COND(addr, return pio_read64be_lo_hi(port), @@ -161,6 +170,7 @@ u64 ioread64be_lo_hi(void __iomem *addr) return 0xffffffffffffffffULL; } +__no_sanitize_memory u64 ioread64be_hi_lo(void __iomem *addr) { IO_COND(addr, return pio_read64be_hi_lo(port), @@ -188,22 +198,32 @@ EXPORT_SYMBOL(ioread64be_hi_lo); void iowrite8(u8 val, 
void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outb(val,port), writeb(val, addr)); } void iowrite16(u16 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outw(val,port), writew(val, addr)); } void iowrite16be(u16 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write16be(val,port), mmio_write16be(val, addr)); } void iowrite32(u32 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, outl(val,port), writel(val, addr)); } void iowrite32be(u32 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write32be(val,port), mmio_write32be(val, addr)); } EXPORT_SYMBOL(iowrite8); @@ -239,24 +259,32 @@ static void pio_write64be_hi_lo(u64 val, unsigned long port) void iowrite64_lo_hi(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64_lo_hi(val, port), writeq(val, addr)); } void iowrite64_hi_lo(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64_hi_lo(val, port), writeq(val, addr)); } void iowrite64be_lo_hi(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64be_lo_hi(val, port), mmio_write64be(val, addr)); } void iowrite64be_hi_lo(u64 val, void __iomem *addr) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(&val, sizeof(val)); IO_COND(addr, pio_write64be_hi_lo(val, port), mmio_write64be(val, addr)); } @@ -328,14 +356,20 @@ static inline void mmio_outsl(void __iomem *addr, const u32 *src, int count) void ioread8_rep(void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insb(port,dst,count), mmio_insb(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_shadow(dst, count); } void ioread16_rep(void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insw(port,dst,count), mmio_insw(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_shadow(dst, count * 2); } void ioread32_rep(void __iomem *addr, void *dst, unsigned long count) { IO_COND(addr, insl(port,dst,count), mmio_insl(addr, dst, count)); + /* KMSAN must treat values read from devices as initialized. */ + kmsan_unpoison_shadow(dst, count * 4); } EXPORT_SYMBOL(ioread8_rep); EXPORT_SYMBOL(ioread16_rep); @@ -343,14 +377,20 @@ EXPORT_SYMBOL(ioread32_rep); void iowrite8_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(src, count); IO_COND(addr, outsb(port, src, count), mmio_outsb(addr, src, count)); } void iowrite16_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. 
*/ + kmsan_check_memory(src, count * 2); IO_COND(addr, outsw(port, src, count), mmio_outsw(addr, src, count)); } void iowrite32_rep(void __iomem *addr, const void *src, unsigned long count) { + /* Make sure uninitialized memory isn't copied to devices. */ + kmsan_check_memory(src, count * 4); IO_COND(addr, outsl(port, src,count), mmio_outsl(addr, src, count)); } EXPORT_SYMBOL(iowrite8_rep); From patchwork Wed Mar 25 16:12:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Alexander Potapenko X-Patchwork-Id: 11458299 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id EB7E615AB for ; Wed, 25 Mar 2020 16:15:04 +0000 (UTC) Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by mail.kernel.org (Postfix) with ESMTP id B81DB206F8 for ; Wed, 25 Mar 2020 16:15:04 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="pJhNRFGm" DMARC-Filter: OpenDMARC Filter v1.3.2 mail.kernel.org B81DB206F8 Authentication-Results: mail.kernel.org; dmarc=fail (p=reject dis=none) header.from=google.com Authentication-Results: mail.kernel.org; spf=pass smtp.mailfrom=owner-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix) id 7F66A6B00AE; Wed, 25 Mar 2020 12:14:45 -0400 (EDT) Delivered-To: linux-mm-outgoing@kvack.org Received: by kanga.kvack.org (Postfix, from userid 40) id 780E46B00B0; Wed, 25 Mar 2020 12:14:45 -0400 (EDT) X-Original-To: int-list-linux-mm@kvack.org X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 5F7266B00B1; Wed, 25 Mar 2020 12:14:45 -0400 (EDT) X-Original-To: linux-mm@kvack.org X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0166.hostedemail.com [216.40.44.166]) by kanga.kvack.org (Postfix) with ESMTP id 3969A6B00AE for ; Wed, 25 Mar 2020 12:14:45 -0400 (EDT) Received: from smtpin27.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay04.hostedemail.com (Postfix) with ESMTP id 1B32675946 for ; Wed, 25 Mar 2020 16:14:45 +0000 (UTC) X-FDA: 76634382930.27.pail35_80bbd499e8c32 X-Spam-Summary: 2,0,0,5a60772eab9fa2ea,d41d8cd98f00b204,3c4n7xgykcgiglidergoogle.comlinux-mmkvack.org@flex--glider.bounces.google.com,,RULES_HIT:41:152:355:379:541:800:960:973:988:989:1260:1277:1313:1314:1345:1359:1431:1437:1516:1518:1534:1541:1593:1594:1711:1730:1747:1777:1792:2393:2559:2562:3138:3139:3140:3141:3142:3152:3352:3867:3872:4250:4321:5007:6261:6653:6742:6743:7901:7903:8603:9969:10004:10400:11026:11232:11473:11658:11914:12043:12048:12114:12297:12438:12555:12895:13069:13221:13229:13311:13357:13846:14096:14097:14181:14394:14659:14721:21080:21365:21444:21451:21627:21990:30054:30064,0,RBL:209.85.219.73:@flex--glider.bounces.google.com:.lbl8.mailshell.net-66.100.201.100 62.18.0.100,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:fp,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:205,LUA_SUMMARY:none X-HE-Tag: pail35_80bbd499e8c32 X-Filterd-Recvd-Size: 4823 Received: from mail-qv1-f73.google.com (mail-qv1-f73.google.com [209.85.219.73]) by imf27.hostedemail.com (Postfix) with ESMTP for ; Wed, 25 Mar 2020 16:14:44 +0000 (UTC) Received: by mail-qv1-f73.google.com with SMTP id d7so2129622qvq.12 for ; Wed, 25 Mar 2020 09:14:44 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20161025; 
h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=z9PuLLyOHeeG51XvIl54FTMtQOLNwHzklufUiWnqLYk=; b=pJhNRFGmnvflOObu0zmMeGj51ofx56OE60MTPelLGlX61aNEnQylvaU2sH8UbmG4xA Ikfa0PsIFSs5YNepMqLTlzVT5/UDgWLmp1XRWdvJnBDKKd0rBNZVoNWNzt/IlrXDr4/b K9eJa6hmGb0u2eQSSDG+Tye75PGvwNGopQa540RLeHFIkELlUK9UPYuqASuHWAE/3Z7n jMyc9yEqnj0sXWwBh2BKRb6Oi4lSfEXot9c1rFv7XyYP9+2mTp+SFoYcRj+2ctHc7mG1 qq7O5rBPQ3wAcLjLyxS/fczIfCWB3AfjUaWHC11CZx3Hpf9jVVeQwZaK9ZFXUn+1hJtA FlGw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=z9PuLLyOHeeG51XvIl54FTMtQOLNwHzklufUiWnqLYk=; b=Ykrq+MiOrriK9lMy0klTFU0UXOZgTZLBQ0HOE32KspRoT9KORtijU4SoXyo94YDIff NLtWFJHCfJJK0PdYLwZmnrHIYPH4IV3DbEt+SOE/ezYNmibWsk9RgnzzYIUx+Jv9TS4k yystg0Om+6X9nrq+VzSbf0jP/uLmDjhh2du17db4nVbrG8JdN6rb2yuF1jUAPQ4ClsoD y2lkJ1OHPva0tjAbeTvgaxx2sc6mg/rnwIuo5kKxke6Pt1aTgbfZwKMoFr+7iKS6w+fo d/MW4vNG+SQtnmhlDTI81bKItV8Z+HIxpCwUEqvXDoVo/0jqSV3h2T6a88PbEMI3qFHj H65w== X-Gm-Message-State: ANhLgQ3gxSZPXCypqIph8zr8n6CqbQNqQWpWluCRf65LHNLqhhnqAsyN RfOBRLBpNr/SnGtQNPGcoAoDhSTToa8= X-Google-Smtp-Source: ADFU+vuaCfzl6DiwCqX9TvQng2NPrWQG2g6QGK54OIbQIQ6eWXEw9irpJAu7MD14PMRt5I6vhV/R6DICe2E= X-Received: by 2002:a0c:f7cf:: with SMTP id f15mr3892168qvo.214.1585152883754; Wed, 25 Mar 2020 09:14:43 -0700 (PDT) Date: Wed, 25 Mar 2020 17:12:45 +0100 In-Reply-To: <20200325161249.55095-1-glider@google.com> Message-Id: <20200325161249.55095-35-glider@google.com> Mime-Version: 1.0 References: <20200325161249.55095-1-glider@google.com> X-Mailer: git-send-email 2.25.1.696.g5e7596f4ac-goog Subject: [PATCH v5 34/38] kmsan: dma: unpoison memory mapped by dma_direct_map_page() From: glider@google.com To: Christoph Hellwig , Marek Szyprowski , Robin Murphy , Vegard Nossum , Dmitry Vyukov , Marco Elver , Andrey Konovalov , linux-mm@kvack.org Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca, akpm@linux-foundation.org, aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com, edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN doesn't know about DMA memory writes performed by devices. We unpoison such memory when it's mapped to avoid false positive reports. 
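For context, a minimal sketch of the direction-dependent handling that kmsan_handle_dma() is expected to apply to the mapped buffer, expressed with the kmsan_check_memory()/kmsan_unpoison_shadow() annotations used elsewhere in this series; the header name and the exact body are assumptions, not the in-tree implementation:

/*
 * Sketch only: what a call like kmsan_handle_dma(addr, size, dir) conceptually
 * does. The <linux/kmsan-checks.h> header name is assumed from this series.
 */
#include <linux/dma-direction.h>	/* enum dma_data_direction */
#include <linux/kmsan-checks.h>		/* assumed: kmsan_check_memory(), kmsan_unpoison_shadow() */

static void kmsan_handle_dma_sketch(void *addr, size_t size,
				    enum dma_data_direction dir)
{
	switch (dir) {
	case DMA_TO_DEVICE:
		/* The device will read the buffer: it must be fully initialized. */
		kmsan_check_memory(addr, size);
		break;
	case DMA_FROM_DEVICE:
		/* The device will overwrite the buffer: treat it as initialized. */
		kmsan_unpoison_shadow(addr, size);
		break;
	case DMA_BIDIRECTIONAL:
		/* Both of the above apply. */
		kmsan_check_memory(addr, size);
		kmsan_unpoison_shadow(addr, size);
		break;
	case DMA_NONE:
		break;
	}
}

Unpoisoning at map time keeps the annotation in one place instead of requiring every driver to mark its DMA buffers individually.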
Signed-off-by: Alexander Potapenko
Cc: Christoph Hellwig
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org

---
Change-Id: Ib1019ed531fea69f88b5cdec3d1e27403f2f3d64
---
 kernel/dma/direct.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index a8560052a915f..63dc1a594964a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -367,6 +367,7 @@ dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
 			     &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
 		return DMA_MAPPING_ERROR;
 	}
+	kmsan_handle_dma(page_address(page) + offset, size, dir);
 
 	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, size, dir);
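
For orientation, the sketch below shows roughly what a kmsan_handle_dma()-style
hook is expected to do for such a mapping. It is an illustration built on the
kmsan_unpoison_shadow()/kmsan_check_memory() primitives used elsewhere in this
series, not the actual KMSAN implementation, and the header name is an
assumption.

#include <linux/types.h>
#include <linux/dma-direction.h>
#include <linux/kmsan-checks.h>	/* assumed home of the KMSAN hooks */

/* Illustrative only -- not the real kmsan_handle_dma(). */
static void kmsan_handle_dma_sketch(void *addr, size_t size,
				    enum dma_data_direction dir)
{
	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
		/* The device may overwrite the buffer: mark it initialized. */
		kmsan_unpoison_shadow(addr, size);
	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
		/* The device will read the buffer: flag uninitialized bytes. */
		kmsan_check_memory(addr, size);
}
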
From patchwork Wed Mar 25 16:12:46 2020
Date: Wed, 25 Mar 2020 17:12:46 +0100
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Message-Id: <20200325161249.55095-36-glider@google.com>
References: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 35/38] kmsan: disable physical page merging in biovec
From: glider@google.com
To: Jens Axboe, Andy Lutomirski, Vegard Nossum, Dmitry Vyukov, Marco Elver,
 Andrey Konovalov, Christoph Hellwig, linux-mm@kvack.org

KMSAN metadata for consecutive physical pages may itself not be
consecutive, so accessing such pages together may lead to metadata
corruption. We disable merging pages in biovec to prevent such corruption.

Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Jens Axboe
Cc: Andy Lutomirski
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: Christoph Hellwig
Cc: linux-mm@kvack.org

---
v4:
 - use IS_ENABLED instead of #ifdef (as requested by Marco Elver)

Change-Id: Id2f2babaf662ac44675c4f2790f4a80ddc328fa7
---
 block/blk.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/block/blk.h b/block/blk.h
index 670337b7cfa0d..065dfee244118 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -79,6 +79,13 @@ static inline bool biovec_phys_mergeable(struct request_queue *q,
 	phys_addr_t addr1 = page_to_phys(vec1->bv_page) + vec1->bv_offset;
 	phys_addr_t addr2 = page_to_phys(vec2->bv_page) + vec2->bv_offset;
 
+	/*
+	 * Merging consequent physical pages may not work correctly under KMSAN
+	 * if their metadata pages aren't consequent. Just disable merging.
+	 */
+	if (IS_ENABLED(CONFIG_KMSAN))
+		return false;
+
 	if (addr1 + vec1->bv_len != addr2)
 		return false;
 	if (xen_domain() && !xen_biovec_phys_mergeable(vec1, vec2->bv_page))
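
Regarding the IS_ENABLED() change noted in the v4 changelog: the macro expands
to a compile-time constant, so the early return is eliminated when KMSAN is off
while the code stays visible to the compiler, unlike an #ifdef block. A minimal
sketch of the pattern (the helper name is made up for illustration):

#include <linux/kconfig.h>	/* IS_ENABLED() */
#include <linux/types.h>

/* Hypothetical helper, for illustration only. */
static inline bool phys_ranges_mergeable(phys_addr_t a, unsigned int len,
					 phys_addr_t b)
{
	/*
	 * IS_ENABLED(CONFIG_KMSAN) folds to 0 or 1 at compile time, so with
	 * KMSAN disabled the branch below is removed entirely.
	 */
	if (IS_ENABLED(CONFIG_KMSAN))
		return false;
	return a + len == b;
}
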
From patchwork Wed Mar 25 16:12:47 2020
Date: Wed, 25 Mar 2020 17:12:47 +0100
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Message-Id: <20200325161249.55095-37-glider@google.com>
References: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 36/38] x86: kasan: kmsan: support CONFIG_GENERIC_CSUM on x86, enable it for KASAN/KMSAN
From: glider@google.com
To: Arnd Bergmann, Michal Simek, Andrey Ryabinin, Vegard Nossum,
 Dmitry Vyukov, Marco Elver, Andrey Konovalov, Randy Dunlap,
 linux-mm@kvack.org

This is needed to allow memory tools like KASAN and KMSAN to see the
memory accesses from the checksum code. Without CONFIG_GENERIC_CSUM the
tools can't see memory accesses originating from handwritten assembly
code. For KASAN it's a question of detecting more bugs; for KMSAN, using
the C implementation also helps avoid false positives originating from
seemingly uninitialized checksum values.

Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Arnd Bergmann
Cc: Michal Simek
Cc: Andrey Ryabinin
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: Randy Dunlap
Cc: linux-mm@kvack.org
Reviewed-by: Andrey Konovalov

---
v2:
 - dropped the "default n" (as requested by Randy Dunlap)

v4:
 - changed "net:" to "x86:" in the patch name

Change-Id: I645e2c097253a8d5717ad87e2e2df6f6f67251f3
---
 arch/x86/Kconfig                |  4 ++++
 arch/x86/include/asm/checksum.h | 10 +++++++---
 arch/x86/lib/Makefile           |  2 ++
 3 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 376c13480def2..c45c937682863 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -274,6 +274,10 @@ config GENERIC_ISA_DMA
 	def_bool y
 	depends on ISA_DMA_API
 
+config GENERIC_CSUM
+	bool
+	default y if KMSAN || KASAN
+
 config GENERIC_BUG
 	def_bool y
 	depends on BUG
diff --git a/arch/x86/include/asm/checksum.h b/arch/x86/include/asm/checksum.h
index d79d1e622dcf1..ab3464cbce26d 100644
--- a/arch/x86/include/asm/checksum.h
+++ b/arch/x86/include/asm/checksum.h
@@ -1,6 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifdef CONFIG_X86_32
-# include <asm/checksum_32.h>
+#ifdef CONFIG_GENERIC_CSUM
+# include <asm-generic/checksum.h>
 #else
-# include <asm/checksum_64.h>
+# ifdef CONFIG_X86_32
+#  include <asm/checksum_32.h>
+# else
+#  include <asm/checksum_64.h>
+# endif
 #endif
diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
index 6110bce7237bd..40d6704c4767d 100644
--- a/arch/x86/lib/Makefile
+++ b/arch/x86/lib/Makefile
@@ -64,7 +64,9 @@ endif
         lib-$(CONFIG_X86_USE_3DNOW) += mmx_32.o
 else
         obj-y += iomap_copy_64.o
+ifneq ($(CONFIG_GENERIC_CSUM),y)
         lib-y += csum-partial_64.o csum-copy_64.o csum-wrappers_64.o
+endif
         lib-y += clear_page_64.o copy_page_64.o
         lib-y += memmove_64.o memset_64.o
         lib-y += copy_user_64.o
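
As an illustration of why a C checksum helps the tools, consider a simplified
ones'-complement fold (this is not the kernel's generic checksum code): every
byte is loaded by ordinary compiled C, which KASAN/KMSAN instrument, whereas
the handwritten assembly routines dropped from the build above are invisible
to them.

#include <linux/types.h>

/* Simplified illustration only -- not the kernel's checksum implementation. */
static u16 csum_fold_sketch(const u8 *buf, size_t len)
{
	u32 sum = 0;
	size_t i;

	/* Each buf[] access below is an instrumented C memory load. */
	for (i = 0; i + 1 < len; i += 2)
		sum += (buf[i] << 8) | buf[i + 1];
	if (len & 1)
		sum += buf[len - 1] << 8;
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (u16)~sum;
}
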
From patchwork Wed Mar 25 16:12:48 2020
Date: Wed, 25 Mar 2020 17:12:48 +0100
In-Reply-To: <20200325161249.55095-1-glider@google.com>
Message-Id: <20200325161249.55095-38-glider@google.com>
References: <20200325161249.55095-1-glider@google.com>
Subject: [PATCH v5 37/38] kmsan: x86/uprobes: unpoison regs in arch_uprobe_exception_notify()
From: glider@google.com
To: Thomas Gleixner, Andrew Morton, Vegard Nossum, Dmitry Vyukov,
 Marco Elver, Andrey Konovalov, linux-mm@kvack.org

arch_uprobe_exception_notify() may receive register state without valid
KMSAN metadata, which will lead to false positives. Explicitly unpoison
args and args->regs to avoid this.

Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Thomas Gleixner
Cc: Andrew Morton
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org

---
This patch was split from "kmsan: disable instrumentation of certain
functions".

v4:
 - split this patch away

Change-Id: I466ef628b00362ab5eb1852c76baa8cdb06736d9
---
 arch/x86/kernel/uprobes.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index 15e5aad8ac2c1..bc156b016dc57 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -8,6 +8,7 @@
  *	Jim Keniston
  */
 #include <linux/kernel.h>
+#include <linux/kmsan-checks.h>
 #include <linux/sched.h>
 #include <linux/ptrace.h>
 #include <linux/uprobes.h>
@@ -997,9 +998,13 @@ int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
 int arch_uprobe_exception_notify(struct notifier_block *self, unsigned long val, void *data)
 {
 	struct die_args *args = data;
-	struct pt_regs *regs = args->regs;
+	struct pt_regs *regs;
 	int ret = NOTIFY_DONE;
 
+	kmsan_unpoison_shadow(args, sizeof(*args));
+	regs = args->regs;
+	if (regs)
+		kmsan_unpoison_shadow(regs, sizeof(*regs));
 	/* We are only interested in userspace traps */
 	if (regs && !user_mode(regs))
 		return NOTIFY_DONE;
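
The same idiom appears wherever data enters from code KMSAN does not
instrument: unpoison the structure before reading its fields. A generic sketch
of that pattern (the callback is hypothetical and the header location is an
assumption):

#include <linux/kdebug.h>
#include <linux/kmsan-checks.h>	/* assumed header for kmsan_unpoison_shadow() */
#include <linux/notifier.h>
#include <linux/ptrace.h>

/* Hypothetical notifier callback, for illustration only. */
static int example_die_notifier(struct notifier_block *self,
				unsigned long val, void *data)
{
	struct die_args *args = data;

	/* Shadow for *args may be stale: declare it initialized first. */
	kmsan_unpoison_shadow(args, sizeof(*args));
	if (args->regs)
		kmsan_unpoison_shadow(args->regs, sizeof(*args->regs));
	return NOTIFY_DONE;
}
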
mingo@elte.hu, jasowang@redhat.com, m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com, mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw, rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com, rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de, gor@linux.ibm.com, wsa@the-dreams.de X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: KMSAN doesn't allow treating adjacent memory pages as such, if they were allocated by different alloc_pages() calls. The block layer however does so: adjacent pages end up being used together. To prevent this, make page_is_mergeable() return false under KMSAN. Suggested-by: Eric Biggers Signed-off-by: Alexander Potapenko Cc: Eric Biggers Cc: Jens Axboe Cc: Vegard Nossum Cc: Dmitry Vyukov Cc: Marco Elver Cc: Andrey Konovalov Cc: linux-mm@kvack.org --- Change-Id: Iff367f421d51fac549e31ed122365b7539642cff --- block/bio.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/block/bio.c b/block/bio.c index 0985f34225561..09503ef00bc20 100644 --- a/block/bio.c +++ b/block/bio.c @@ -696,6 +696,8 @@ static inline bool page_is_mergeable(const struct bio_vec *bv, *same_page = ((vec_end_addr & PAGE_MASK) == page_addr); if (!*same_page && pfn_to_page(PFN_DOWN(vec_end_addr)) + 1 != page) return false; + if (!*same_page && IS_ENABLED(CONFIG_KMSAN)) + return false; return true; }