From patchwork Fri Dec 20 18:49:44 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11306195
Date: Fri, 20 Dec 2019 19:49:44 +0100
In-Reply-To: <20191220184955.223741-1-glider@google.com>
Message-Id: <20191220184955.223741-32-glider@google.com>
References: <20191220184955.223741-1-glider@google.com>
Subject: [PATCH RFC v4 31/42] kmsan: hooks for copy_to_user() and friends
From: glider@google.com
To: Alexander Viro, Vegard Nossum, Dmitry Vyukov, Marco Elver,
 Andrey Konovalov, linux-mm@kvack.org
Cc: glider@google.com, adilger.kernel@dilger.ca, akpm@linux-foundation.org,
 aryabinin@virtuozzo.com, luto@kernel.org, ard.biesheuvel@linaro.org,
 arnd@arndb.de, hch@infradead.org, hch@lst.de, darrick.wong@oracle.com,
 davem@davemloft.net, dmitry.torokhov@gmail.com, ebiggers@google.com,
 edumazet@google.com, ericvh@gmail.com, gregkh@linuxfoundation.org,
 harry.wentland@amd.com, herbert@gondor.apana.org.au, iii@linux.ibm.com,
 mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk,
 m.szyprowski@samsung.com, mark.rutland@arm.com, martin.petersen@oracle.com,
 schwidefsky@de.ibm.com, willy@infradead.org, mst@redhat.com,
 mhocko@suse.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw,
 rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com,
 rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, tglx@linutronix.de,
 gor@linux.ibm.com, wsa@the-dreams.de

Memory that is copied from userspace must be unpoisoned. Before copying
memory to userspace, check it and report an error if it contains
uninitialized bits.
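For context, the kind of bug the copy_to_user() check is meant to catch is a
kernel infoleak: copying a partially initialized object to userspace leaks
uninitialized stack or heap bytes. The snippet below is a made-up example,
not code from the kernel tree; example_info/example_get_info() are
hypothetical names used only for illustration.

#include <linux/types.h>
#include <linux/uaccess.h>

/* Hypothetical ioctl-style handler used only for illustration. */
struct example_info {
	u32 version;
	u32 flags;	/* never initialized below */
};

static long example_get_info(void __user *arg)
{
	struct example_info info;	/* stack memory starts out poisoned */

	info.version = 1;
	/* info.flags is left uninitialized. With the hooks added by this
	 * patch, KMSAN sees that some of the copied bytes are still
	 * poisoned and reports the leak at this copy_to_user() call. */
	if (copy_to_user(arg, &info, sizeof(info)))
		return -EFAULT;
	return 0;
}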
Signed-off-by: Alexander Potapenko
To: Alexander Potapenko
Cc: Alexander Viro
Cc: Vegard Nossum
Cc: Dmitry Vyukov
Cc: Marco Elver
Cc: Andrey Konovalov
Cc: linux-mm@kvack.org

---
v3:
 - fixed compilation errors reported by kbuild test bot

v4:
 - minor variable fixes as requested by Andrey Konovalov
 - simplified code around copy_to_user() hooks

Change-Id: I38428b9c7d1909b8441dcec1749b080494a7af99
---
 arch/x86/include/asm/uaccess.h   | 10 ++++++++++
 include/asm-generic/cacheflush.h |  7 ++++++-
 include/asm-generic/uaccess.h    | 12 +++++++++--
 include/linux/uaccess.h          | 34 ++++++++++++++++++++++++++------
 lib/iov_iter.c                   | 14 +++++++++----
 lib/usercopy.c                   |  8 ++++++--
 6 files changed, 70 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 61d93f062a36..bfb55fdba5df 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -6,6 +6,7 @@
  */
 #include
 #include
+#include
 #include
 #include
 #include
@@ -174,6 +175,7 @@ __typeof__(__builtin_choose_expr(sizeof(x) > sizeof(0UL), 0ULL, 0UL))
 		     ASM_CALL_CONSTRAINT				\
 		     : "0" (ptr), "i" (sizeof(*(ptr))));		\
 	(x) = (__force __typeof__(*(ptr))) __val_gu;			\
+	kmsan_unpoison_shadow(&(x), sizeof(*(ptr)));			\
 	__builtin_expect(__ret_gu, 0);					\
 })
 
@@ -248,6 +250,7 @@ extern void __put_user_8(void);
 	__chk_user_ptr(ptr);					\
 	might_fault();						\
 	__pu_val = x;						\
+	kmsan_check_memory(&(__pu_val), sizeof(*(ptr)));	\
 	switch (sizeof(*(ptr))) {				\
 	case 1:							\
 		__put_user_x(1, __pu_val, ptr, __ret_pu);	\
@@ -270,7 +273,9 @@ extern void __put_user_8(void);
 
 #define __put_user_size(x, ptr, size, label)			\
 do {								\
+	__typeof__(*(ptr)) __pus_val = x;			\
 	__chk_user_ptr(ptr);					\
+	kmsan_check_memory(&(__pus_val), size);			\
 	switch (size) {						\
 	case 1:							\
 		__put_user_goto(x, ptr, "b", "b", "iq", label);	\
@@ -295,7 +300,9 @@ do {								\
  */
 #define __put_user_size_ex(x, ptr, size)			\
 do {								\
+	__typeof__(*(ptr)) __puse_val = x;			\
 	__chk_user_ptr(ptr);					\
+	kmsan_check_memory(&(__puse_val), size);		\
 	switch (size) {						\
 	case 1:							\
 		__put_user_asm_ex(x, ptr, "b", "b", "iq");	\
@@ -363,6 +370,7 @@ do {								\
 	default:						\
 		(x) = __get_user_bad();				\
 	}							\
+	kmsan_unpoison_shadow(&(x), size);			\
 } while (0)
 
 #define __get_user_asm(x, addr, err, itype, rtype, ltype, errret)	\
@@ -413,6 +421,7 @@ do {								\
 	default:						\
 		(x) = __get_user_bad();				\
 	}							\
+	kmsan_unpoison_shadow(&(x), size);			\
 } while (0)
 
 #define __get_user_asm_ex(x, addr, itype, rtype, ltype)		\
@@ -433,6 +442,7 @@ do {								\
 	__typeof__(ptr) __pu_ptr = (ptr);			\
 	__typeof__(size) __pu_size = (size);			\
 	__uaccess_begin();					\
+	kmsan_check_memory(&(__pu_val), size);			\
 	__put_user_size(__pu_val, __pu_ptr, __pu_size, __pu_label);	\
 	__pu_err = 0;						\
 __pu_label:							\
diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
index a950a22c4890..707531dccf5e 100644
--- a/include/asm-generic/cacheflush.h
+++ b/include/asm-generic/cacheflush.h
@@ -4,6 +4,7 @@
 /* Keep includes the same across arches.
  */
 #include
+#include
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 0
 
@@ -72,10 +73,14 @@ static inline void flush_cache_vunmap(unsigned long start, unsigned long end)
 
 #define copy_to_user_page(vma, page, vaddr, dst, src, len) \
 	do { \
+		kmsan_check_memory(src, len); \
 		memcpy(dst, src, len); \
 		flush_icache_user_range(vma, page, vaddr, len); \
 	} while (0)
 
 #define copy_from_user_page(vma, page, vaddr, dst, src, len) \
-	memcpy(dst, src, len)
+	do { \
+		memcpy(dst, src, len); \
+		kmsan_unpoison_shadow(dst, len); \
+	} while (0)
 #endif /* __ASM_CACHEFLUSH_H */
diff --git a/include/asm-generic/uaccess.h b/include/asm-generic/uaccess.h
index e935318804f8..88b626c3ef2d 100644
--- a/include/asm-generic/uaccess.h
+++ b/include/asm-generic/uaccess.h
@@ -142,7 +142,11 @@ static inline int __access_ok(unsigned long addr, unsigned long size)
 
 static inline int __put_user_fn(size_t size, void __user *ptr, void *x)
 {
-	return unlikely(raw_copy_to_user(ptr, x, size)) ? -EFAULT : 0;
+	int n;
+
+	n = raw_copy_to_user(ptr, x, size);
+	kmsan_copy_to_user(ptr, x, size, n);
+	return unlikely(n) ? -EFAULT : 0;
 }
 
 #define __put_user_fn(sz, u, k)	__put_user_fn(sz, u, k)
@@ -203,7 +207,11 @@ extern int __put_user_bad(void) __attribute__((noreturn));
 #ifndef __get_user_fn
 static inline int __get_user_fn(size_t size, const void __user *ptr, void *x)
 {
-	return unlikely(raw_copy_from_user(x, ptr, size)) ? -EFAULT : 0;
+	int res;
+
+	res = raw_copy_from_user(x, ptr, size);
+	kmsan_unpoison_shadow(x, size - res);
+	return unlikely(res) ? -EFAULT : 0;
 }
 
 #define __get_user_fn(sz, u, k)	__get_user_fn(sz, u, k)
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 67f016010aad..efb3cd554140 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 
 #define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS)
 
@@ -58,18 +59,26 @@ static __always_inline __must_check unsigned long
 __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 {
+	unsigned long res;
+
 	kasan_check_write(to, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	kmsan_unpoison_shadow(to, n - res);
+	return res;
 }
 
 static __always_inline __must_check unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	unsigned long res;
+
 	might_fault();
 	kasan_check_write(to, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	res = raw_copy_from_user(to, from, n);
+	kmsan_unpoison_shadow(to, n - res);
+	return res;
 }
 
 /**
@@ -88,18 +97,26 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
 static __always_inline __must_check unsigned long
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long res;
+
 	kasan_check_read(from, n);
 	check_object_size(from, n, true);
-	return raw_copy_to_user(to, from, n);
+	res = raw_copy_to_user(to, from, n);
+	kmsan_copy_to_user((const void *)to, from, n, res);
+	return res;
 }
 
 static __always_inline __must_check unsigned long
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long res;
+
 	might_fault();
 	kasan_check_read(from, n);
 	check_object_size(from, n, true);
-	return raw_copy_to_user(to, from, n);
+	res = raw_copy_to_user(to, from, n);
+	kmsan_copy_to_user((const void *)to, from, n, res);
+	return res;
 }
 
 #ifdef INLINE_COPY_FROM_USER
@@ -107,10 +124,12 @@ static inline __must_check unsigned long
 _copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	unsigned long res = n;
+
 	might_fault();
 	if (likely(access_ok(from, n))) {
 		kasan_check_write(to, n);
 		res = raw_copy_from_user(to, from, n);
+		kmsan_unpoison_shadow(to, n - res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
@@ -125,12 +144,15 @@ _copy_from_user(void *, const void __user *, unsigned long);
 static inline __must_check unsigned long
 _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long res;
+
 	might_fault();
 	if (access_ok(to, n)) {
 		kasan_check_read(from, n);
-		n = raw_copy_to_user(to, from, n);
+		res = raw_copy_to_user(to, from, n);
+		kmsan_copy_to_user(to, from, n, res);
 	}
-	return n;
+	return res;
 }
 #else
 extern __must_check unsigned long
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index fb29c02c6a3c..3262db1ace59 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -137,20 +137,26 @@
 
 static int copyout(void __user *to, const void *from, size_t n)
 {
+	int res;
+
 	if (access_ok(to, n)) {
 		kasan_check_read(from, n);
-		n = raw_copy_to_user(to, from, n);
+		res = raw_copy_to_user(to, from, n);
+		kmsan_copy_to_user(to, from, n, res);
 	}
-	return n;
+	return res;
 }
 
 static int copyin(void *to, const void __user *from, size_t n)
 {
+	size_t res;
+
 	if (access_ok(from, n)) {
 		kasan_check_write(to, n);
-		n = raw_copy_from_user(to, from, n);
+		res = raw_copy_from_user(to, from, n);
+		kmsan_unpoison_shadow(to, n - res);
 	}
-	return n;
+	return res;
 }
 
 static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
diff --git a/lib/usercopy.c b/lib/usercopy.c
index cbb4d9ec00f2..5f2c95f6ae95 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -1,4 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0
+#include
 #include
 #include
 
@@ -12,6 +13,7 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
 	if (likely(access_ok(from, n))) {
 		kasan_check_write(to, n);
 		res = raw_copy_from_user(to, from, n);
+		kmsan_unpoison_shadow(to, n - res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
@@ -23,12 +25,14 @@ EXPORT_SYMBOL(_copy_from_user);
 #ifndef INLINE_COPY_TO_USER
 unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long res;
 	might_fault();
 	if (likely(access_ok(to, n))) {
 		kasan_check_read(from, n);
-		n = raw_copy_to_user(to, from, n);
+		res = raw_copy_to_user(to, from, n);
+		kmsan_copy_to_user(to, from, n, res);
 	}
-	return n;
+	return res;
 }
 EXPORT_SYMBOL(_copy_to_user);
 #endif
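
The hunks above all follow the same call-site pattern. The sketch below is a
distilled illustration only, not part of the patch: the example_* helpers are
hypothetical names, while kmsan_copy_to_user(), kmsan_unpoison_shadow() and
the raw_copy_*_user() primitives (which return the number of bytes that were
NOT copied) are the interfaces the patch actually uses.

#include <linux/uaccess.h>
/* kmsan_copy_to_user()/kmsan_unpoison_shadow() come from the KMSAN header
 * added by this series. */

static unsigned long example_copy_to_user(void __user *to, const void *from,
					  unsigned long n)
{
	/* Copy first; the leftover count says how much reached userspace. */
	unsigned long left = raw_copy_to_user(to, from, n);

	/* Let KMSAN check only the bytes that were actually copied out and
	 * report any uninitialized (poisoned) bits among them. */
	kmsan_copy_to_user(to, from, n, left);
	return left;
}

static unsigned long example_copy_from_user(void *to, const void __user *from,
					    unsigned long n)
{
	unsigned long left = raw_copy_from_user(to, from, n);

	/* Only the prefix that was actually copied is initialized now, so
	 * unpoison just those n - left bytes of the kernel buffer. */
	kmsan_unpoison_shadow(to, n - left);
	return left;
}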