From patchwork Fri Nov 22 11:26:09 2019
X-Patchwork-Submitter: Alexander Potapenko
X-Patchwork-Id: 11257905
Date: Fri, 22 Nov 2019 12:26:09 +0100
In-Reply-To: <20191122112621.204798-1-glider@google.com>
Message-Id: <20191122112621.204798-25-glider@google.com>
References: <20191122112621.204798-1-glider@google.com>
X-Mailer: git-send-email 2.24.0.432.g9d3f5f5b63-goog
Subject: [PATCH RFC v3 24/36] kmsan: disable instrumentation of certain functions
From: glider@google.com
To: Thomas Gleixner <tglx@linutronix.de>, Andrew Morton <akpm@linux-foundation.org>,
 Vegard Nossum <vegard.nossum@oracle.com>, Dmitry Vyukov <dvyukov@google.com>,
 linux-mm@kvack.org
Cc: glider@google.com, viro@zeniv.linux.org.uk, adilger.kernel@dilger.ca,
 andreyknvl@google.com, aryabinin@virtuozzo.com, luto@kernel.org,
 ard.biesheuvel@linaro.org, arnd@arndb.de, hch@infradead.org, hch@lst.de,
 darrick.wong@oracle.com, davem@davemloft.net, dmitry.torokhov@gmail.com,
 ebiggers@google.com, edumazet@google.com, ericvh@gmail.com,
 gregkh@linuxfoundation.org, harry.wentland@amd.com, herbert@gondor.apana.org.au,
 iii@linux.ibm.com, mingo@elte.hu, jasowang@redhat.com, axboe@kernel.dk,
 m.szyprowski@samsung.com, elver@google.com, mark.rutland@arm.com,
 martin.petersen@oracle.com, schwidefsky@de.ibm.com, willy@infradead.org,
 mst@redhat.com, monstr@monstr.eu, pmladek@suse.com, cai@lca.pw,
 rdunlap@infradead.org, robin.murphy@arm.com, sergey.senozhatsky@gmail.com,
 rostedt@goodmis.org, tiwai@suse.com, tytso@mit.edu, gor@linux.ibm.com,
 wsa@the-dreams.de

Some functions are called from handwritten assembly, and therefore don't
have their arguments' metadata fully set up by the instrumentation code.
Mark them with __no_sanitize_memory to avoid false positives from
spreading further.

Certain functions perform task switching, so the value of |current| changes
as they proceed. Because the KMSAN state pointer is only read once at the
beginning of a function, touching it after |current| has changed may be
dangerous.
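For illustration only (not part of this patch), __no_sanitize_memory can be
thought of as expanding to the clang attribute that suppresses KMSAN
instrumentation for a single function; the exact definition lives in the
compiler headers of this series, and the sanitizer name below is an
assumption:

/*
 * Rough sketch, assumptions as noted above: under clang with KMSAN enabled,
 * the annotation maps to no_sanitize("kernel-memory"); otherwise it is empty.
 */
#if defined(__clang__) && defined(CONFIG_KMSAN)
#define __no_sanitize_memory __attribute__((no_sanitize("kernel-memory")))
#else
#define __no_sanitize_memory
#endif

/*
 * Hypothetical example: a function reached directly from assembly has no
 * argument shadow set up, so it is annotated to keep false positives from
 * spreading.
 */
__no_sanitize_memory
void reached_from_asm(unsigned long nr, struct pt_regs *regs);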
Signed-off-by: Alexander Potapenko <glider@google.com>
To: Alexander Potapenko <glider@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: linux-mm@kvack.org
---

v3:
 - removed TODOs from comments

Change-Id: I684d23dac5a22eb0a4cea71993cb934302b17cea
---
 arch/x86/entry/common.c                |  1 +
 arch/x86/include/asm/irq_regs.h        |  1 +
 arch/x86/include/asm/syscall_wrapper.h |  1 +
 arch/x86/kernel/apic/apic.c            |  1 +
 arch/x86/kernel/dumpstack_64.c         |  1 +
 arch/x86/kernel/process_64.c           |  5 +++++
 arch/x86/kernel/traps.c                | 12 ++++++++++--
 arch/x86/kernel/uprobes.c              |  7 ++++++-
 kernel/profile.c                       |  1 +
 kernel/sched/core.c                    |  6 ++++++
 10 files changed, 33 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 3f8e22615812..0dd5b2acb355 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -275,6 +275,7 @@ __visible inline void syscall_return_slowpath(struct pt_regs *regs)
 }
 
 #ifdef CONFIG_X86_64
+__no_sanitize_memory
 __visible void do_syscall_64(unsigned long nr, struct pt_regs *regs)
 {
 	struct thread_info *ti;
diff --git a/arch/x86/include/asm/irq_regs.h b/arch/x86/include/asm/irq_regs.h
index 187ce59aea28..d65a00bd6f02 100644
--- a/arch/x86/include/asm/irq_regs.h
+++ b/arch/x86/include/asm/irq_regs.h
@@ -14,6 +14,7 @@
 DECLARE_PER_CPU(struct pt_regs *, irq_regs);
 
+__no_sanitize_memory
 static inline struct pt_regs *get_irq_regs(void)
 {
 	return __this_cpu_read(irq_regs);
diff --git a/arch/x86/include/asm/syscall_wrapper.h b/arch/x86/include/asm/syscall_wrapper.h
index e046a405743d..43910ce1b53b 100644
--- a/arch/x86/include/asm/syscall_wrapper.h
+++ b/arch/x86/include/asm/syscall_wrapper.h
@@ -159,6 +159,7 @@
 	ALLOW_ERROR_INJECTION(__x64_sys##name, ERRNO);			\
 	static long __se_sys##name(__MAP(x,__SC_LONG,__VA_ARGS__));	\
 	static inline long __do_sys##name(__MAP(x,__SC_DECL,__VA_ARGS__));\
+	__no_sanitize_memory						\
 	asmlinkage long __x64_sys##name(const struct pt_regs *regs)	\
 	{								\
 		return __se_sys##name(SC_X86_64_REGS_TO_ARGS(x,__VA_ARGS__));\
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index 9e2dd2b296cd..7b24bda22c38 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1118,6 +1118,7 @@ static void local_apic_timer_interrupt(void)
  * [ if a single-CPU system runs an SMP kernel then we call the local
  *   interrupt as well. Thus we cannot inline the local irq ... ]
  */
+__no_sanitize_memory /* |regs| may be uninitialized */
 __visible void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
diff --git a/arch/x86/kernel/dumpstack_64.c b/arch/x86/kernel/dumpstack_64.c
index 753b8cfe8b8a..ba883d282a43 100644
--- a/arch/x86/kernel/dumpstack_64.c
+++ b/arch/x86/kernel/dumpstack_64.c
@@ -143,6 +143,7 @@ static bool in_irq_stack(unsigned long *stack, struct stack_info *info)
 	return true;
 }
 
+__no_sanitize_memory
 int get_stack_info(unsigned long *stack, struct task_struct *task,
 		   struct stack_info *info, unsigned long *visit_mask)
 {
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index af64519b2695..70e33150a83a 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -500,6 +500,11 @@ void compat_start_thread(struct pt_regs *regs, u32 new_ip, u32 new_sp)
  * Kprobes not supported here. Set the probe on schedule instead.
  * Function graph tracer not supported too.
  */
+/*
+ * Avoid touching KMSAN state or reporting anything here, as __switch_to() does
+ * weird things with tasks.
+ */
+__no_sanitize_memory
 __visible __notrace_funcgraph struct task_struct *
 __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 {
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 4bb0f8447112..a94282d1f60b 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -618,7 +618,10 @@ NOKPROBE_SYMBOL(do_int3);
  * Help handler running on a per-cpu (IST or entry trampoline) stack
  * to switch to the normal thread stack if the interrupted code was in
  * user mode. The actual stack switch is done in entry_64.S
+ *
+ * This function switches the registers - don't instrument it with KMSAN!
  */
+__no_sanitize_memory
 asmlinkage __visible notrace struct pt_regs *sync_regs(struct pt_regs *eregs)
 {
 	struct pt_regs *regs = (struct pt_regs *)this_cpu_read(cpu_current_top_of_stack) - 1;
@@ -634,6 +637,11 @@ struct bad_iret_stack {
 };
 
 asmlinkage __visible notrace
+/*
+ * Dark magic happening here, let's not instrument this function.
+ * Also avoid copying any metadata by using raw __memmove().
+ */
+__no_sanitize_memory
 struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
 {
 	/*
@@ -648,10 +656,10 @@ struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
 		(struct bad_iret_stack *)this_cpu_read(cpu_tss_rw.x86_tss.sp0) - 1;
 
 	/* Copy the IRET target to the new stack. */
-	memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8);
+	__memmove(&new_stack->regs.ip, (void *)s->regs.sp, 5*8);
 
 	/* Copy the remainder of the stack from the current stack. */
-	memmove(new_stack, s, offsetof(struct bad_iret_stack, regs.ip));
+	__memmove(new_stack, s, offsetof(struct bad_iret_stack, regs.ip));
 
 	BUG_ON(!user_mode(&new_stack->regs));
 	return new_stack;
diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index 8cd745ef8c7b..bcd4bf5a909f 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -8,6 +8,7 @@
  *	Jim Keniston
  */
 #include <linux/kernel.h>
+#include <linux/kmsan-checks.h>
 #include <linux/sched.h>
 #include <linux/ptrace.h>
 #include <linux/uprobes.h>
@@ -997,9 +998,13 @@ int arch_uprobe_post_xol(struct arch_uprobe *auprobe, struct pt_regs *regs)
 int arch_uprobe_exception_notify(struct notifier_block *self, unsigned long val, void *data)
 {
 	struct die_args *args = data;
-	struct pt_regs *regs = args->regs;
+	struct pt_regs *regs;
 	int ret = NOTIFY_DONE;
 
+	kmsan_unpoison_shadow(args, sizeof(*args));
+	regs = args->regs;
+	if (regs)
+		kmsan_unpoison_shadow(regs, sizeof(*regs));
 	/* We are only interested in userspace traps */
 	if (regs && !user_mode(regs))
 		return NOTIFY_DONE;
diff --git a/kernel/profile.c b/kernel/profile.c
index af7c94bf5fa1..835a5b66d1a4 100644
--- a/kernel/profile.c
+++ b/kernel/profile.c
@@ -399,6 +399,7 @@ void profile_hits(int type, void *__pc, unsigned int nr_hits)
 }
 EXPORT_SYMBOL_GPL(profile_hits);
 
+__no_sanitize_memory
 void profile_tick(int type)
 {
 	struct pt_regs *regs = get_irq_regs();
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index dd05a378631a..674d36fe9d44 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -475,6 +475,7 @@ void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
 		put_task_struct(task);
 }
 
+__no_sanitize_memory /* context switching here */
 void wake_up_q(struct wake_q_head *head)
 {
 	struct wake_q_node *node = head->first;
@@ -3180,6 +3181,7 @@ prepare_task_switch(struct rq *rq, struct task_struct *prev,
  * past. prev == current is still correct but we need to recalculate this_rq
  * because prev may have moved to another CPU.
  */
+__no_sanitize_memory /* |current| changes here */
 static struct rq *finish_task_switch(struct task_struct *prev)
 	__releases(rq->lock)
 {
@@ -3986,6 +3988,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
  *
  * WARNING: must be called with preemption disabled!
  */
+__no_sanitize_memory /* |current| changes here */
 static void __sched notrace __schedule(bool preempt)
 {
 	struct task_struct *prev, *next;
@@ -4605,6 +4608,7 @@ int task_prio(const struct task_struct *p)
  *
  * Return: 1 if the CPU is currently idle. 0 otherwise.
  */
+__no_sanitize_memory /* nothing to report here */
 int idle_cpu(int cpu)
 {
 	struct rq *rq = cpu_rq(cpu);
@@ -6544,6 +6548,7 @@ static struct kmem_cache *task_group_cache __read_mostly;
 DECLARE_PER_CPU(cpumask_var_t, load_balance_mask);
 DECLARE_PER_CPU(cpumask_var_t, select_idle_mask);
 
+__no_sanitize_memory
 void __init sched_init(void)
 {
 	unsigned long ptr = 0;
@@ -6716,6 +6721,7 @@ static inline int preempt_count_equals(int preempt_offset)
 	return (nested == preempt_offset);
 }
 
+__no_sanitize_memory /* expect the arguments to be initialized */
 void __might_sleep(const char *file, int line, int preempt_offset)
 {
 	/*
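
For reference, a short sketch of the unpoisoning pattern used in the uprobes
hunk above. It assumes the kmsan_unpoison_shadow() helper introduced earlier
in this series (taken here to be declared in <linux/kmsan-checks.h>); the
wrapper function below is purely illustrative and not part of the patch:

#include <linux/kdebug.h>        /* struct die_args */
#include <linux/kmsan-checks.h>  /* kmsan_unpoison_shadow(), per this series */

/*
 * Illustrative only: data that reaches C code from non-instrumented paths
 * (assembly, notifier chains fed by trap handlers) has no shadow metadata,
 * so it is explicitly marked as initialized before being read.
 */
static void unpoison_die_args(struct die_args *args)
{
	kmsan_unpoison_shadow(args, sizeof(*args));
	if (args->regs)
		kmsan_unpoison_shadow(args->regs, sizeof(*args->regs));
}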