From patchwork Tue Oct 5 10:58:43 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536159
Date: Tue, 5 Oct 2021 12:58:43 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-2-elver@google.com>
Subject: [PATCH -rcu/kcsan 01/23] kcsan: Refactor reading of instrumented memory
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov,
    Ingo Molnar, Josh Poimboeuf, Mark Rutland, Peter Zijlstra,
    Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com,
    linux-arch@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, x86@kernel.org

Factor out the switch statement reading instrumented memory into a
helper read_instrumented_memory(). No functional change.

Signed-off-by: Marco Elver
---
 kernel/kcsan/core.c | 51 +++++++++++++++------------------------------
 1 file changed, 17 insertions(+), 34 deletions(-)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 4b84c8e7884b..6bfd3040f46b 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -325,6 +325,21 @@ static void delay_access(int type)
 	udelay(delay);
 }
 
+/*
+ * Reads the instrumented memory for value change detection; value change
+ * detection is currently done for accesses up to a size of 8 bytes.
+ */
+static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
+{
+	switch (size) {
+	case 1:  return READ_ONCE(*(const u8 *)ptr);
+	case 2:  return READ_ONCE(*(const u16 *)ptr);
+	case 4:  return READ_ONCE(*(const u32 *)ptr);
+	case 8:  return READ_ONCE(*(const u64 *)ptr);
+	default: return 0; /* Ignore; we do not diff the values. */
+	}
+}
+
 void kcsan_save_irqtrace(struct task_struct *task)
 {
 #ifdef CONFIG_TRACE_IRQFLAGS
@@ -482,23 +497,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * Read the current value, to later check and infer a race if the data
 	 * was modified via a non-instrumented access, e.g. from a device.
 	 */
-	old = 0;
-	switch (size) {
-	case 1:
-		old = READ_ONCE(*(const u8 *)ptr);
-		break;
-	case 2:
-		old = READ_ONCE(*(const u16 *)ptr);
-		break;
-	case 4:
-		old = READ_ONCE(*(const u32 *)ptr);
-		break;
-	case 8:
-		old = READ_ONCE(*(const u64 *)ptr);
-		break;
-	default:
-		break; /* ignore; we do not diff the values */
-	}
+	old = read_instrumented_memory(ptr, size);
 
 	/*
 	 * Delay this thread, to increase probability of observing a racy
@@ -511,23 +510,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * racy access.
 	 */
 	access_mask = ctx->access_mask;
-	new = 0;
-	switch (size) {
-	case 1:
-		new = READ_ONCE(*(const u8 *)ptr);
-		break;
-	case 2:
-		new = READ_ONCE(*(const u16 *)ptr);
-		break;
-	case 4:
-		new = READ_ONCE(*(const u32 *)ptr);
-		break;
-	case 8:
-		new = READ_ONCE(*(const u64 *)ptr);
-		break;
-	default:
-		break; /* ignore; we do not diff the values */
-	}
+	new = read_instrumented_memory(ptr, size);
 
 	diff = old ^ new;
 	if (access_mask)
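[Editor's note] For readers following along: the helper's return value feeds
KCSAN's value-change ("diff") logic, which the tail of the hunk above hints
at. A minimal sketch of that logic, assuming a standalone helper
value_changed() that is hypothetical (the real code inlines this in
kcsan_setup_watchpoint()):

	/*
	 * Sketch: infer a race from a value change observed under a
	 * watchpoint. 'old' is read before the delay, 'new' after it.
	 */
	static bool value_changed(u64 old, u64 new, unsigned long access_mask)
	{
		u64 diff = old ^ new;	/* bits that changed while watched */

		if (access_mask)	/* caller only cares about masked bits */
			diff &= access_mask;
		return diff != 0;
	}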
From patchwork Tue Oct 5 10:58:44 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536161
Date: Tue, 5 Oct 2021 12:58:44 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-3-elver@google.com>
Subject: [PATCH -rcu/kcsan 02/23] kcsan: Remove redundant zero-initialization of globals
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"

They are implicitly zero-initialized; remove the explicit initialization.
This keeps the upcoming additions to kcsan_ctx consistent with the rest.

No functional change intended.
Signed-off-by: Marco Elver
---
 init/init_task.c    | 9 +--------
 kernel/kcsan/core.c | 5 -----
 2 files changed, 1 insertion(+), 13 deletions(-)

diff --git a/init/init_task.c b/init/init_task.c
index 2d024066e27b..61700365ce58 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -181,14 +181,7 @@ struct task_struct init_task
 	.kasan_depth	= 1,
 #endif
 #ifdef CONFIG_KCSAN
-	.kcsan_ctx = {
-		.disable_count		= 0,
-		.atomic_next		= 0,
-		.atomic_nest_count	= 0,
-		.in_flat_atomic		= false,
-		.access_mask		= 0,
-		.scoped_accesses	= {LIST_POISON1, NULL},
-	},
+	.kcsan_ctx = { .scoped_accesses = {LIST_POISON1, NULL} },
 #endif
 #ifdef CONFIG_TRACE_IRQFLAGS
 	.softirqs_enabled = 1,
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 6bfd3040f46b..e34a1710b7bc 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -44,11 +44,6 @@ bool kcsan_enabled;
 
 /* Per-CPU kcsan_ctx for interrupts */
 static DEFINE_PER_CPU(struct kcsan_ctx, kcsan_cpu_ctx) = {
-	.disable_count		= 0,
-	.atomic_next		= 0,
-	.atomic_nest_count	= 0,
-	.in_flat_atomic		= false,
-	.access_mask		= 0,
 	.scoped_accesses	= {LIST_POISON1, NULL},
 };
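[Editor's note] The removal is safe by C initialization semantics: in an
initializer with designated members, every member not explicitly named is
implicitly zero-initialized (C11 6.7.9). A standalone illustration (the
struct here is a stand-in, not the real kcsan_ctx):

	#include <assert.h>

	struct ctx {
		int  disable_count;
		int  atomic_next;
		void *list;
	};

	int main(void)
	{
		/* Only .list is named; the int members become 0. */
		struct ctx c = { .list = (void *)0x1 };

		assert(c.disable_count == 0 && c.atomic_next == 0);
		return 0;
	}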
From patchwork Tue Oct 5 10:58:45 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536163
Date: Tue, 5 Oct 2021 12:58:45 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-4-elver@google.com>
Subject: [PATCH -rcu/kcsan 03/23] kcsan: Avoid checking scoped accesses from nested contexts
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"

Avoid checking scoped accesses from nested contexts (such as nested
interrupts or scheduler code) which share the same kcsan_ctx. This
avoids detecting false-positive races between accesses in the same
thread and currently scoped accesses: consider setting up a watchpoint
for a non-scoped (normal) access that also "conflicts" with a current
scoped access. In a nested interrupt (or in the scheduler), which
shares the same kcsan_ctx, we cannot check scoped accesses set up in
the parent context -- simply ignore them in this case.

With the introduction of kcsan_ctx::disable_scoped, we can also clean
up kcsan_check_scoped_accesses()'s recursion guard, and no longer need
to modify the list's prev pointer.
Signed-off-by: Marco Elver
---
 include/linux/kcsan.h |  1 +
 kernel/kcsan/core.c   | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/kcsan.h b/include/linux/kcsan.h
index fc266ecb2a4d..13cef3458fed 100644
--- a/include/linux/kcsan.h
+++ b/include/linux/kcsan.h
@@ -21,6 +21,7 @@
  */
 struct kcsan_ctx {
 	int disable_count; /* disable counter */
+	int disable_scoped; /* disable scoped access counter */
 	int atomic_next; /* number of following atomic ops */
 
 	/*
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index e34a1710b7bc..bd359f8ee63a 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -204,15 +204,17 @@ check_access(const volatile void *ptr, size_t size, int type, unsigned long ip);
 static noinline void kcsan_check_scoped_accesses(void)
 {
 	struct kcsan_ctx *ctx = get_ctx();
-	struct list_head *prev_save = ctx->scoped_accesses.prev;
 	struct kcsan_scoped_access *scoped_access;
 
-	ctx->scoped_accesses.prev = NULL;  /* Avoid recursion. */
+	if (ctx->disable_scoped)
+		return;
+
+	ctx->disable_scoped++;
 	list_for_each_entry(scoped_access, &ctx->scoped_accesses, list) {
 		check_access(scoped_access->ptr, scoped_access->size,
 			     scoped_access->type, scoped_access->ip);
 	}
-	ctx->scoped_accesses.prev = prev_save;
+	ctx->disable_scoped--;
 }
 
 /* Rules for generic atomic accesses. Called from fast-path. */
@@ -465,6 +467,15 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 		goto out;
 	}
 
+	/*
+	 * Avoid races of scoped accesses from nested interrupts (or scheduler).
+	 * Assume setting up a watchpoint for a non-scoped (normal) access that
+	 * also conflicts with a current scoped access. In a nested interrupt,
+	 * which shares the context, it would check a conflicting scoped access.
+	 * To avoid, disable scoped access checking.
+	 */
+	ctx->disable_scoped++;
+
 	/*
 	 * Save and restore the IRQ state trace touched by KCSAN, since KCSAN's
 	 * runtime is entered for every memory access, and potentially useful
@@ -578,6 +589,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	if (!kcsan_interrupt_watcher)
 		local_irq_restore(irq_flags);
 	kcsan_restore_irqtrace(current);
+	ctx->disable_scoped--;
 out:
 	user_access_restore(ua_flags);
 }
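[Editor's note] To make the false-positive scenario concrete, here is a
hedged sketch of the situation the patch guards against.
ASSERT_EXCLUSIVE_ACCESS_SCOPED() is the real KCSAN API; the struct and the
two functions are made up for illustration:

	struct foo { int state; int irq_data; };

	/* Task context: the scoped assertion covers the rest of the function. */
	static void task_work(struct foo *f)
	{
		ASSERT_EXCLUSIVE_ACCESS_SCOPED(f->state);
		f->state++;	/* normal access; may set up a watchpoint */
	}

	/*
	 * A nested interrupt shares the task's kcsan_ctx. Without
	 * disable_scoped, it would walk the task's scoped_accesses list and
	 * re-check f->state from interrupt context, where it can "conflict"
	 * with the task's own watchpoint -- a same-thread false positive.
	 */
	static void irq_handler(struct foo *f)
	{
		f->irq_data++;
	}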
From patchwork Tue Oct 5 10:58:46 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536165
Date: Tue, 5 Oct 2021 12:58:46 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-5-elver@google.com>
Subject: [PATCH -rcu/kcsan 04/23] kcsan: Add core support for a subset of weak memory modeling
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"

Add support for modeling a subset of weak memory, which will enable
detection of a subset of data races due to missing memory barriers.

KCSAN's approach to detecting missing memory barriers is based on
modeling access reordering, and is enabled if CONFIG_KCSAN_WEAK_MEMORY=y,
which depends on CONFIG_KCSAN_STRICT=y. The feature can be enabled or
disabled at boot and runtime via the "kcsan.weak_memory" boot parameter.

Each memory access for which a watchpoint is set up is also selected for
simulated reordering within the scope of its function (at most 1
in-flight access). We are limited to modeling the effects of "buffering"
(delaying the access), since the runtime cannot "prefetch" accesses
(therefore no acquire modeling). Once an access has been selected for
reordering, it is checked along every other access until the end of the
function scope. If an appropriate memory barrier is encountered, the
access will no longer be considered for reordering.

When the result of a memory operation should be ordered by a barrier,
KCSAN can then detect data races where the conflict only occurs because
a missing barrier allowed the access to be reordered.
Suggested-by: Dmitry Vyukov
Signed-off-by: Marco Elver
---
 include/linux/kcsan-checks.h |  10 +-
 include/linux/kcsan.h        |  10 +-
 include/linux/sched.h        |   3 +
 kernel/kcsan/core.c          | 222 ++++++++++++++++++++++++++++++++---
 lib/Kconfig.kcsan            |  16 +++
 scripts/Makefile.kcsan       |   9 +-
 6 files changed, 251 insertions(+), 19 deletions(-)

diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h
index 5f5965246877..a1c6a89fde71 100644
--- a/include/linux/kcsan-checks.h
+++ b/include/linux/kcsan-checks.h
@@ -99,7 +99,15 @@ void kcsan_set_access_mask(unsigned long mask);
 
 /* Scoped access information. */
 struct kcsan_scoped_access {
-	struct list_head list;
+	union {
+		struct list_head list; /* scoped_accesses list */
+		/*
+		 * Not an entry in scoped_accesses list; stack depth from where
+		 * the access was initialized.
+		 */
+		int stack_depth;
+	};
+
 	/* Access information. */
 	const volatile void *ptr;
 	size_t size;
diff --git a/include/linux/kcsan.h b/include/linux/kcsan.h
index 13cef3458fed..c07c71f5ba4f 100644
--- a/include/linux/kcsan.h
+++ b/include/linux/kcsan.h
@@ -49,8 +49,16 @@ struct kcsan_ctx {
 	 */
 	unsigned long access_mask;
 
-	/* List of scoped accesses. */
+	/* List of scoped accesses; likely to be empty. */
 	struct list_head scoped_accesses;
+
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	/*
+	 * Scoped access for modeling access reordering to detect missing memory
+	 * barriers; only keep 1 to keep fast-path complexity manageable.
+	 */
+	struct kcsan_scoped_access reorder_access;
+#endif
 };
 
 /**
diff --git a/include/linux/sched.h b/include/linux/sched.h
index e12b524426b0..c6a96f0cc984 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1343,6 +1343,9 @@ struct task_struct {
 #ifdef CONFIG_TRACE_IRQFLAGS
 	struct irqtrace_events kcsan_save_irqtrace;
 #endif
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	int kcsan_stack_depth;
+#endif
 #endif
 
 #if IS_ENABLED(CONFIG_KUNIT)
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index bd359f8ee63a..0e180d1cd53d 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -20,6 +21,8 @@
 #include
 #include
+#include
+
 #include "encoding.h"
 #include "kcsan.h"
 #include "permissive.h"
@@ -29,6 +32,7 @@ unsigned int kcsan_udelay_task = CONFIG_KCSAN_UDELAY_TASK;
 unsigned int kcsan_udelay_interrupt = CONFIG_KCSAN_UDELAY_INTERRUPT;
 static long kcsan_skip_watch = CONFIG_KCSAN_SKIP_WATCH;
 static bool kcsan_interrupt_watcher = IS_ENABLED(CONFIG_KCSAN_INTERRUPT_WATCHER);
+static bool kcsan_weak_memory = IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY);
 
 #ifdef MODULE_PARAM_PREFIX
 #undef MODULE_PARAM_PREFIX
@@ -39,6 +43,9 @@ module_param_named(udelay_task, kcsan_udelay_task, uint, 0644);
 module_param_named(udelay_interrupt, kcsan_udelay_interrupt, uint, 0644);
 module_param_named(skip_watch, kcsan_skip_watch, long, 0644);
 module_param_named(interrupt_watcher, kcsan_interrupt_watcher, bool, 0444);
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+module_param_named(weak_memory, kcsan_weak_memory, bool, 0644);
+#endif
 
 bool kcsan_enabled;
 
@@ -102,6 +109,22 @@ static DEFINE_PER_CPU(long, kcsan_skip);
 /* For kcsan_prandom_u32_max(). */
 static DEFINE_PER_CPU(u32, kcsan_rand_state);
 
+#if !defined(CONFIG_ARCH_WANTS_NO_INSTR) || defined(CONFIG_STACK_VALIDATION)
+/*
+ * Arch does not rely on noinstr, or objtool will remove memory barrier
+ * instrumentation, and no instrumentation of noinstr code is expected.
+ */
+#define kcsan_noinstr
+static inline bool within_noinstr(unsigned long ip) { return false; }
+#else
+#define kcsan_noinstr noinstr
+static __always_inline bool within_noinstr(unsigned long ip)
+{
+	return (unsigned long)__noinstr_text_start <= ip &&
+	       ip < (unsigned long)__noinstr_text_end;
+}
+#endif
+
 static __always_inline
 atomic_long_t *find_watchpoint(unsigned long addr,
 			       size_t size,
 			       bool expect_write,
@@ -351,6 +374,67 @@ void kcsan_restore_irqtrace(struct task_struct *task)
 #endif
 }
 
+static __always_inline int get_kcsan_stack_depth(void)
+{
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	return current->kcsan_stack_depth;
+#else
+	BUILD_BUG();
+	return 0;
+#endif
+}
+
+static __always_inline void add_kcsan_stack_depth(int val)
+{
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	current->kcsan_stack_depth += val;
+#else
+	BUILD_BUG();
+#endif
+}
+
+static __always_inline struct kcsan_scoped_access *get_reorder_access(struct kcsan_ctx *ctx)
+{
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	return ctx->disable_scoped ? NULL : &ctx->reorder_access;
+#else
+	return NULL;
+#endif
+}
+
+static __always_inline bool
+find_reorder_access(struct kcsan_ctx *ctx, const volatile void *ptr, size_t size,
+		    int type, unsigned long ip)
+{
+	struct kcsan_scoped_access *reorder_access = get_reorder_access(ctx);
+
+	if (!reorder_access)
+		return false;
+
+	/*
+	 * Note: If accesses are repeated while reorder_access is identical,
+	 * never matches the new access, because !(type & KCSAN_ACCESS_SCOPED).
+	 */
+	return reorder_access->ptr == ptr && reorder_access->size == size &&
+	       reorder_access->type == type && reorder_access->ip == ip;
+}
+
+static inline void
+set_reorder_access(struct kcsan_ctx *ctx, const volatile void *ptr, size_t size,
+		   int type, unsigned long ip)
+{
+	struct kcsan_scoped_access *reorder_access = get_reorder_access(ctx);
+
+	if (!reorder_access || !kcsan_weak_memory)
+		return;
+
+	reorder_access->ptr = ptr;
+	reorder_access->size = size;
+	reorder_access->type = type | KCSAN_ACCESS_SCOPED;
+	reorder_access->ip = ip;
+	reorder_access->stack_depth = get_kcsan_stack_depth();
+}
+
 /*
  * Pull everything together: check_access() below contains the performance
  * critical operations; the fast-path (including check_access) functions should
@@ -389,8 +473,10 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
 	 * The access_mask check relies on value-change comparison. To avoid
 	 * reporting a race where e.g. the writer set up the watchpoint, but the
 	 * reader has access_mask!=0, we have to ignore the found watchpoint.
+	 *
+	 * reorder_access is never created from an access with access_mask set.
 	 */
-	if (ctx->access_mask)
+	if (ctx->access_mask && !find_reorder_access(ctx, ptr, size, type, ip))
 		return;
 
 	/*
@@ -440,11 +526,13 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	const bool is_assert = (type & KCSAN_ACCESS_ASSERT) != 0;
 	atomic_long_t *watchpoint;
 	u64 old, new, diff;
-	unsigned long access_mask;
 	enum kcsan_value_change value_change = KCSAN_VALUE_CHANGE_MAYBE;
+	bool interrupt_watcher = kcsan_interrupt_watcher;
 	unsigned long ua_flags = user_access_save();
 	struct kcsan_ctx *ctx = get_ctx();
+	unsigned long access_mask = ctx->access_mask;
 	unsigned long irq_flags = 0;
+	bool is_reorder_access;
 
 	/*
 	 * Always reset kcsan_skip counter in slow-path to avoid underflow; see
@@ -467,6 +555,17 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 		goto out;
 	}
 
+	/*
+	 * The local CPU cannot observe reordering of its own accesses, and
+	 * therefore we need to take care of 2 cases to avoid false positives:
+	 *
+	 *	1. Races of the reordered access with interrupts. To avoid, if
+	 *	   the current access is reorder_access, disable interrupts.
+	 *	2. Avoid races of scoped accesses from nested interrupts (below).
+	 */
+	is_reorder_access = find_reorder_access(ctx, ptr, size, type, ip);
+	if (is_reorder_access)
+		interrupt_watcher = false;
 	/*
 	 * Avoid races of scoped accesses from nested interrupts (or scheduler).
 	 * Assume setting up a watchpoint for a non-scoped (normal) access that
@@ -482,7 +581,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * information is lost if dirtied by KCSAN.
 	 */
 	kcsan_save_irqtrace(current);
-	if (!kcsan_interrupt_watcher)
+	if (!interrupt_watcher)
 		local_irq_save(irq_flags);
 
 	watchpoint = insert_watchpoint((unsigned long)ptr, size, is_write);
@@ -503,7 +602,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * Read the current value, to later check and infer a race if the data
 	 * was modified via a non-instrumented access, e.g. from a device.
 	 */
-	old = read_instrumented_memory(ptr, size);
+	old = is_reorder_access ? 0 : read_instrumented_memory(ptr, size);
 
 	/*
 	 * Delay this thread, to increase probability of observing a racy
@@ -515,8 +614,17 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * Re-read value, and check if it is as expected; if not, we infer a
 	 * racy access.
 	 */
-	access_mask = ctx->access_mask;
-	new = read_instrumented_memory(ptr, size);
+	if (!is_reorder_access) {
+		new = read_instrumented_memory(ptr, size);
+	} else {
+		/*
+		 * Reordered accesses cannot be used for value change detection,
+		 * because the memory location may no longer be accessible and
+		 * could result in a fault.
+		 */
+		new = 0;
+		access_mask = 0;
+	}
 
 	diff = old ^ new;
 	if (access_mask)
@@ -585,11 +693,20 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 */
 	remove_watchpoint(watchpoint);
 	atomic_long_dec(&kcsan_counters[KCSAN_COUNTER_USED_WATCHPOINTS]);
+
 out_unlock:
-	if (!kcsan_interrupt_watcher)
+	if (!interrupt_watcher)
 		local_irq_restore(irq_flags);
 	kcsan_restore_irqtrace(current);
 	ctx->disable_scoped--;
+
+	/*
+	 * Reordered accesses cannot be used for value change detection,
+	 * therefore never consider for reordering if access_mask is set.
+	 * ASSERT_EXCLUSIVE are not real accesses, ignore them as well.
+	 */
+	if (!access_mask && !is_assert)
+		set_reorder_access(ctx, ptr, size, type, ip);
 out:
 	user_access_restore(ua_flags);
 }
@@ -597,7 +714,6 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 static __always_inline void
 check_access(const volatile void *ptr, size_t size, int type, unsigned long ip)
 {
-	const bool is_write = (type & KCSAN_ACCESS_WRITE) != 0;
 	atomic_long_t *watchpoint;
 	long encoded_watchpoint;
 
@@ -608,12 +724,14 @@ check_access(const volatile void *ptr, size_t size, int type, unsigned long ip)
 	if (unlikely(size == 0))
 		return;
 
+again:
 	/*
 	 * Avoid user_access_save in fast-path: find_watchpoint is safe without
 	 * user_access_save, as the address that ptr points to is only used to
 	 * check if a watchpoint exists; ptr is never dereferenced.
 	 */
-	watchpoint = find_watchpoint((unsigned long)ptr, size, !is_write,
+	watchpoint = find_watchpoint((unsigned long)ptr, size,
+				     !(type & KCSAN_ACCESS_WRITE),
 				     &encoded_watchpoint);
 	/*
 	 * It is safe to check kcsan_is_enabled() after find_watchpoint in the
@@ -627,9 +745,42 @@ check_access(const volatile void *ptr, size_t size, int type, unsigned long ip)
 	else {
 		struct kcsan_ctx *ctx = get_ctx(); /* Call only once in fast-path. */
 
-		if (unlikely(should_watch(ctx, ptr, size, type)))
+		if (unlikely(should_watch(ctx, ptr, size, type))) {
 			kcsan_setup_watchpoint(ptr, size, type, ip);
-		else if (unlikely(ctx->scoped_accesses.prev))
+			return;
+		}
+
+		if (!(type & KCSAN_ACCESS_SCOPED)) {
+			struct kcsan_scoped_access *reorder_access = get_reorder_access(ctx);
+
+			if (reorder_access) {
+				/*
+				 * reorder_access check: simulates reordering of
+				 * the access after subsequent operations.
+				 */
+				ptr = reorder_access->ptr;
+				type = reorder_access->type;
+				ip = reorder_access->ip;
+				/*
+				 * Upon a nested interrupt, this context's
+				 * reorder_access can be modified (shared ctx).
+				 * We know that upon return, reorder_access is
+				 * always invalidated by setting size to 0 via
+				 * __tsan_func_exit(). Therefore we must read
+				 * and check size after the other fields.
+				 */
+				barrier();
+				size = READ_ONCE(reorder_access->size);
+				if (size)
+					goto again;
+			}
+		}
+
+		/*
+		 * Always checked last, right before returning from runtime;
+		 * if reorder_access is valid, this is checked after it.
+		 */
+		if (unlikely(ctx->scoped_accesses.prev))
 			kcsan_check_scoped_accesses();
 	}
 }
@@ -916,19 +1067,60 @@ DEFINE_TSAN_VOLATILE_READ_WRITE(8);
 DEFINE_TSAN_VOLATILE_READ_WRITE(16);
 
 /*
- * The below are not required by KCSAN, but can still be emitted by the
- * compiler.
+ * Function entry and exit are used to determine the validity of reorder_access.
+ * Reordering of the access ends at the end of the function scope where the
+ * access happened. This is done for two reasons:
+ *
+ *	1. Artificially limits the scope where missing barriers are detected.
+ *	   This minimizes false positives due to uninstrumented functions that
+ *	   contain the required barriers but were missed.
+ *
+ *	2. Simplifies generating the stack trace of the access.
  */
 void __tsan_func_entry(void *call_pc);
-void __tsan_func_entry(void *call_pc)
+kcsan_noinstr void __tsan_func_entry(void *call_pc)
 {
+	if (!IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY) || within_noinstr(_RET_IP_))
+		return;
+
+	instrumentation_begin();
+	add_kcsan_stack_depth(1);
+	instrumentation_end();
 }
 EXPORT_SYMBOL(__tsan_func_entry);
+
 void __tsan_func_exit(void);
-void __tsan_func_exit(void)
+kcsan_noinstr void __tsan_func_exit(void)
 {
+	struct kcsan_scoped_access *reorder_access;
+
+	if (!IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY) || within_noinstr(_RET_IP_))
+		return;
+
+	instrumentation_begin();
+	reorder_access = get_reorder_access(get_ctx());
+	if (!reorder_access)
+		goto out;
+
+	if (get_kcsan_stack_depth() <= reorder_access->stack_depth) {
+		/*
+		 * Access check to catch cases where write without a barrier
+		 * (supposed release) was last access in function: because
+		 * instrumentation is inserted before the real access, a data
+		 * race due to the write giving up a c-s would only be caught if
+		 * we do the conflicting access after.
+		 */
+		check_access(reorder_access->ptr, reorder_access->size,
+			     reorder_access->type, reorder_access->ip);
+		reorder_access->size = 0;
+		reorder_access->stack_depth = INT_MIN;
+	}
+out:
+	add_kcsan_stack_depth(-1);
+	instrumentation_end();
 }
 EXPORT_SYMBOL(__tsan_func_exit);
+
 void __tsan_init(void);
 void __tsan_init(void)
 {
diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index e0a93ffdef30..55b33bc11824 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -191,6 +191,22 @@ config KCSAN_STRICT
 	  closely aligns with the rules defined by the Linux-kernel memory
 	  consistency model (LKMM).
 
+config KCSAN_WEAK_MEMORY
+	bool "Enable weak memory modeling to detect missing memory barriers"
+	default y
+	depends on KCSAN_STRICT
+	help
+	  Enable support for modeling a subset of weak memory, which allows
+	  detecting a subset of data races due to missing memory barriers.
+
+	  Depends on KCSAN_STRICT, because the options strengthening certain
+	  plain accesses by default (depending on !KCSAN_STRICT) reduce the
+	  ability to detect any data races involving reordered accesses, in
+	  particular reordered writes.
+
+	  Weak memory modeling relies on additional instrumentation and may
+	  affect performance.
+
 config KCSAN_REPORT_VALUE_CHANGE_ONLY
 	bool "Only report races where watcher observed a data value change"
 	default y
diff --git a/scripts/Makefile.kcsan b/scripts/Makefile.kcsan
index 37cb504c77e1..4c7f0d282e42 100644
--- a/scripts/Makefile.kcsan
+++ b/scripts/Makefile.kcsan
@@ -9,7 +9,12 @@ endif
 
 # Keep most options here optional, to allow enabling more compilers if absence
 # of some options does not break KCSAN nor causes false positive reports.
-export CFLAGS_KCSAN := -fsanitize=thread \
-	$(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0) -fno-optimize-sibling-calls) \
+kcsan-cflags := -fsanitize=thread -fno-optimize-sibling-calls \
 	$(call cc-option,$(call cc-param,tsan-compound-read-before-write=1),$(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1))) \
 	$(call cc-param,tsan-distinguish-volatile=1)
+
+ifndef CONFIG_KCSAN_WEAK_MEMORY
+kcsan-cflags += $(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0))
+endif
+
+export CFLAGS_KCSAN := $(kcsan-cflags)
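[Editor's note] As an illustration of the class of bug this feature makes
detectable, consider the classic message-passing pattern with a missing
release barrier. This sketch is not from the patch; produce()/consume()
and do_something() are made-up names:

	int data;
	int ready;

	/* Producer */
	void produce(void)
	{
		data = 42;
		/* BUG: missing smp_wmb() (or smp_store_release(&ready, 1)).
		 * KCSAN may select the plain write to 'data' as the in-flight
		 * reordered access and keep checking it until the end of
		 * produce()'s scope, where it can conflict with the
		 * consumer's read of 'data'. */
		WRITE_ONCE(ready, 1);
	}

	/* Consumer */
	void consume(void)
	{
		while (!READ_ONCE(ready))
			cpu_relax();
		smp_rmb();
		do_something(data);	/* races with the reordered write */
	}

With a correct smp_wmb() between the two writes, the barrier
instrumentation added in the next patch invalidates the in-flight
reordered write and no race is reported. The modeling can also be turned
off at boot or runtime via the kcsan.weak_memory parameter mentioned in
the commit message above.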
From patchwork Tue Oct 5 10:58:47 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536167
Date: Tue, 5 Oct 2021 12:58:47 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-6-elver@google.com>
Subject: [PATCH -rcu/kcsan 05/23] kcsan: Add core memory barrier instrumentation functions
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"

Add the core memory barrier instrumentation functions. These invalidate
the current in-flight reordered access based on the rules for the
respective barrier types and in-flight access type.
Signed-off-by: Marco Elver
---
 include/linux/kcsan-checks.h | 41 ++++++++++++++++++++++++++++++++++--
 kernel/kcsan/core.c          | 36 +++++++++++++++++++++++++++++++
 2 files changed, 75 insertions(+), 2 deletions(-)

diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h
index a1c6a89fde71..c9e7c39a7d7b 100644
--- a/include/linux/kcsan-checks.h
+++ b/include/linux/kcsan-checks.h
@@ -36,6 +36,26 @@
  */
 void __kcsan_check_access(const volatile void *ptr, size_t size, int type);
 
+/**
+ * __kcsan_mb - full memory barrier instrumentation
+ */
+void __kcsan_mb(void);
+
+/**
+ * __kcsan_wmb - write memory barrier instrumentation
+ */
+void __kcsan_wmb(void);
+
+/**
+ * __kcsan_rmb - read memory barrier instrumentation
+ */
+void __kcsan_rmb(void);
+
+/**
+ * __kcsan_release - release barrier instrumentation
+ */
+void __kcsan_release(void);
+
 /**
  * kcsan_disable_current - disable KCSAN for the current context
  *
@@ -159,6 +179,10 @@ void kcsan_end_scoped_access(struct kcsan_scoped_access *sa);
 static inline void __kcsan_check_access(const volatile void *ptr, size_t size,
 					int type) { }
 
+static inline void __kcsan_mb(void)			{ }
+static inline void __kcsan_wmb(void)			{ }
+static inline void __kcsan_rmb(void)			{ }
+static inline void __kcsan_release(void)		{ }
+
 static inline void kcsan_disable_current(void)		{ }
 static inline void kcsan_enable_current(void)		{ }
 static inline void kcsan_enable_current_nowarn(void)	{ }
@@ -191,12 +215,25 @@ static inline void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) { }
  */
 #define __kcsan_disable_current kcsan_disable_current
 #define __kcsan_enable_current kcsan_enable_current_nowarn
-#else
+#else /* __SANITIZE_THREAD__ */
 static inline void kcsan_check_access(const volatile void *ptr, size_t size,
 				      int type) { }
 static inline void __kcsan_enable_current(void)  { }
 static inline void __kcsan_disable_current(void) { }
-#endif
+#endif /* __SANITIZE_THREAD__ */
+
+#if defined(CONFIG_KCSAN_WEAK_MEMORY) && \
+    (defined(__SANITIZE_THREAD__) || defined(__KCSAN_INSTRUMENT_BARRIERS__))
+#define kcsan_mb	__kcsan_mb
+#define kcsan_wmb	__kcsan_wmb
+#define kcsan_rmb	__kcsan_rmb
+#define kcsan_release	__kcsan_release
+#else /* CONFIG_KCSAN_WEAK_MEMORY && (__SANITIZE_THREAD__ || __KCSAN_INSTRUMENT_BARRIERS__) */
+static inline void kcsan_mb(void)			{ }
+static inline void kcsan_wmb(void)			{ }
+static inline void kcsan_rmb(void)			{ }
+static inline void kcsan_release(void)			{ }
+#endif /* CONFIG_KCSAN_WEAK_MEMORY && (__SANITIZE_THREAD__ || __KCSAN_INSTRUMENT_BARRIERS__) */
 
 /**
  * __kcsan_check_read - check regular read access for races
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 0e180d1cd53d..47c95ccff19f 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -955,6 +955,28 @@ void __kcsan_check_access(const volatile void *ptr, size_t size, int type)
 }
 EXPORT_SYMBOL(__kcsan_check_access);
 
+#define DEFINE_MEMORY_BARRIER(name, order_before_cond)				\
+	kcsan_noinstr void __kcsan_##name(void)					\
+	{									\
+		struct kcsan_scoped_access *sa;					\
+		if (within_noinstr(_RET_IP_))					\
+			return;							\
+		instrumentation_begin();					\
+		sa = get_reorder_access(get_ctx());				\
+		if (!sa)							\
+			goto out;						\
+		if (order_before_cond)						\
+			sa->size = 0;						\
+	out:									\
+		instrumentation_end();						\
+	}									\
+	EXPORT_SYMBOL(__kcsan_##name)
+
+DEFINE_MEMORY_BARRIER(mb, true);
+DEFINE_MEMORY_BARRIER(wmb, sa->type & (KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND));
+DEFINE_MEMORY_BARRIER(rmb, !(sa->type & KCSAN_ACCESS_WRITE) || (sa->type & KCSAN_ACCESS_COMPOUND));
+DEFINE_MEMORY_BARRIER(release, true);
+
 /*
 * KCSAN uses the same instrumentation that is emitted by supported compilers
 * for ThreadSanitizer (TSAN).
@@ -1143,10 +1165,19 @@ EXPORT_SYMBOL(__tsan_init);
  * functions, whose job is to also execute the operation itself.
  */
 
+static __always_inline void kcsan_atomic_release(int memorder)
+{
+	if (memorder == __ATOMIC_RELEASE ||
+	    memorder == __ATOMIC_SEQ_CST ||
+	    memorder == __ATOMIC_ACQ_REL)
+		__kcsan_release();
+}
+
 #define DEFINE_TSAN_ATOMIC_LOAD_STORE(bits)                                   \
 	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder); \
 	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder)  \
 	{                                                                      \
+		kcsan_atomic_release(memorder);                                \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                \
 			check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_ATOMIC, _RET_IP_); \
 		}                                                              \
@@ -1156,6 +1187,7 @@ EXPORT_SYMBOL(__tsan_init);
 	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder); \
 	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder)  \
 	{                                                                      \
+		kcsan_atomic_release(memorder);                                \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                \
 			check_access(ptr, bits / BITS_PER_BYTE,                \
 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC, _RET_IP_); \
 		}                                                              \
@@ -1168,6 +1200,7 @@ EXPORT_SYMBOL(__tsan_init);
 	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder); \
 	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder)  \
 	{                                                                      \
+		kcsan_atomic_release(memorder);                                \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                \
 			check_access(ptr, bits / BITS_PER_BYTE,                \
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | \
@@ -1200,6 +1233,7 @@ EXPORT_SYMBOL(__tsan_init);
 	int __tsan_atomic##bits##_compare_exchange_##strength(u##bits *ptr, u##bits *exp, \
 							      u##bits val, int mo, int fail_mo) \
 	{                                                                      \
+		kcsan_atomic_release(mo);                                      \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                \
 			check_access(ptr, bits / BITS_PER_BYTE,                \
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | \
@@ -1215,6 +1249,7 @@ EXPORT_SYMBOL(__tsan_init);
 	u##bits __tsan_atomic##bits##_compare_exchange_val(u##bits *ptr, u##bits exp, u##bits val, \
 							   int mo, int fail_mo) \
 	{                                                                      \
+		kcsan_atomic_release(mo);                                      \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                \
 			check_access(ptr, bits / BITS_PER_BYTE,                \
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | \
@@ -1246,6 +1281,7 @@ DEFINE_TSAN_ATOMIC_OPS(64);
 
 void __tsan_atomic_thread_fence(int memorder);
 void __tsan_atomic_thread_fence(int memorder)
 {
+	kcsan_atomic_release(memorder);
 	__atomic_thread_fence(memorder);
 }
 EXPORT_SYMBOL(__tsan_atomic_thread_fence);
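[Editor's note] Read together, the conditions passed to
DEFINE_MEMORY_BARRIER() above encode the invalidation rules: __kcsan_mb()
and __kcsan_release() invalidate any in-flight reordered access;
__kcsan_wmb() invalidates only writes (and compound read-writes);
__kcsan_rmb() invalidates only reads (and compound read-writes). One
plausible way the kernel's barriers could end up calling these hooks is
sketched below; the exact wiring is done elsewhere in the series, so this
macro and the name arch_smp_wmb() are assumptions for illustration only:

	/* Illustration only: instrument first, then run the real barrier. */
	#define smp_wmb()							\
	do {									\
		kcsan_wmb();	/* invalidate an in-flight reordered write */	\
		arch_smp_wmb();	/* the architecture's actual barrier */		\
	} while (0)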
From patchwork Tue Oct 5 10:58:48 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536169
Date: Tue, 5 Oct 2021 12:58:48 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-7-elver@google.com>
Subject: [PATCH -rcu/kcsan 06/23] kcsan, kbuild: Add option for barrier instrumentation only
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Source files that disable KCSAN via KCSAN_SANITIZE := n have all instrumentation removed, including the explicit barrier instrumentation. With instrumentation for memory barriers, a few places need to enable only the explicit barrier instrumentation to avoid false positives. Setting the Makefile variable KCSAN_INSTRUMENT_BARRIERS_obj.o, or KCSAN_INSTRUMENT_BARRIERS (for all files), to 'y' enables only the explicit barrier instrumentation. Signed-off-by: Marco Elver --- scripts/Makefile.lib | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib index 54582673fc1a..2118f63b2bc5 100644 --- a/scripts/Makefile.lib +++ b/scripts/Makefile.lib @@ -182,6 +182,11 @@ ifeq ($(CONFIG_KCSAN),y) _c_flags += $(if $(patsubst n%,, \ $(KCSAN_SANITIZE_$(basetarget).o)$(KCSAN_SANITIZE)y), \ $(CFLAGS_KCSAN)) +# Some uninstrumented files provide implied barriers required to avoid false +# positives: set KCSAN_INSTRUMENT_BARRIERS for barrier instrumentation only.
+_c_flags += $(if $(patsubst n%,, \ + $(KCSAN_INSTRUMENT_BARRIERS_$(basetarget).o)$(KCSAN_INSTRUMENT_BARRIERS)n), \ + -D__KCSAN_INSTRUMENT_BARRIERS__) endif # $(srctree)/$(src) for including checkin headers from generated source files
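For illustration, a hypothetical subsystem Makefile using the new variables might look as follows (dummy.o is a placeholder object name, not taken from this series):

  # Remove all KCSAN instrumentation from this directory ...
  KCSAN_SANITIZE := n
  # ... but keep the explicit memory barrier instrumentation for one object:
  KCSAN_INSTRUMENT_BARRIERS_dummy.o := y

  # Alternatively, enable barrier-only instrumentation for every object:
  # KCSAN_INSTRUMENT_BARRIERS := y

This matches the pattern in Makefile.lib above: full instrumentation is gated on KCSAN_SANITIZE, while -D__KCSAN_INSTRUMENT_BARRIERS__ is added whenever the barrier-only variable is set.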
From patchwork Tue Oct 5 10:58:49 2021
Date: Tue, 5 Oct 2021 12:58:49 +0200
Message-Id: <20211005105905.1994700-8-elver@google.com>
Subject: [PATCH -rcu/kcsan 07/23] kcsan: Call scoped accesses reordered in reports
From: Marco Elver <elver@google.com>

The scoping of an access simply denotes the scope in which it may be reordered. In reports, however, it is less confusing to say the access was "reordered", which is also more accurate for the moment the race occurred. Signed-off-by: Marco Elver --- kernel/kcsan/kcsan_test.c | 4 ++-- kernel/kcsan/report.c | 16 ++++++++-------- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c index 660729238588..6e3c2b8bc608 100644 --- a/kernel/kcsan/kcsan_test.c +++ b/kernel/kcsan/kcsan_test.c @@ -213,9 +213,9 @@ static bool report_matches(const struct expect_report *r) const bool is_atomic = (ty & KCSAN_ACCESS_ATOMIC); const bool is_scoped = (ty & KCSAN_ACCESS_SCOPED); const char *const access_type_aux = - (is_atomic && is_scoped) ? " (marked, scoped)" + (is_atomic && is_scoped) ? " (marked, reordered)" : (is_atomic ? " (marked)" - : (is_scoped ? " (scoped)" : "")); + : (is_scoped ? " (reordered)" : "")); if (i == 1) { /* Access 2 */ diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c index fc15077991c4..1b0e050bdf6a 100644 --- a/kernel/kcsan/report.c +++ b/kernel/kcsan/report.c @@ -215,9 +215,9 @@ static const char *get_access_type(int type) if (type & KCSAN_ACCESS_ASSERT) { if (type & KCSAN_ACCESS_SCOPED) { if (type & KCSAN_ACCESS_WRITE) - return "assert no accesses (scoped)"; + return "assert no accesses (reordered)"; else - return "assert no writes (scoped)"; + return "assert no writes (reordered)"; } else { if (type & KCSAN_ACCESS_WRITE) return "assert no accesses"; @@ -240,17 +240,17 @@ static const char *get_access_type(int type) case KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: return "read-write (marked)"; case KCSAN_ACCESS_SCOPED: - return "read (scoped)"; + return "read (reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_ATOMIC: - return "read (marked, scoped)"; + return "read (marked, reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE: - return "write (scoped)"; + return "write (reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: - return "write (marked, scoped)"; + return "write (marked, reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE: - return "read-write (scoped)"; + return "read-write (reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: - return "read-write (marked, scoped)"; + return "read-write (marked, reordered)"; default: BUG(); }
From patchwork Tue Oct 5 10:58:50 2021
Date: Tue, 5 Oct 2021 12:58:50 +0200
Message-Id: <20211005105905.1994700-9-elver@google.com>
Subject: [PATCH -rcu/kcsan 08/23] kcsan: Show location access was reordered to
From: Marco Elver <elver@google.com>

Also show the location the access was reordered to.
An example report: | ================================================================== | BUG: KCSAN: data-race in test_kernel_wrong_memorder / test_kernel_wrong_memorder | | read-write to 0xffffffffc01e61a8 of 8 bytes by task 2311 on cpu 5: | test_kernel_wrong_memorder+0x57/0x90 | access_thread+0x99/0xe0 | kthread+0x2ba/0x2f0 | ret_from_fork+0x22/0x30 | | read-write (reordered) to 0xffffffffc01e61a8 of 8 bytes by task 2310 on cpu 7: | test_kernel_wrong_memorder+0x57/0x90 | access_thread+0x99/0xe0 | kthread+0x2ba/0x2f0 | ret_from_fork+0x22/0x30 | | | +-> reordered to: test_kernel_wrong_memorder+0x80/0x90 | | Reported by Kernel Concurrency Sanitizer on: | CPU: 7 PID: 2310 Comm: access_thread Not tainted 5.14.0-rc1+ #18 | Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 | ================================================================== Signed-off-by: Marco Elver --- kernel/kcsan/report.c | 35 +++++++++++++++++++++++------------ 1 file changed, 23 insertions(+), 12 deletions(-) diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c index 1b0e050bdf6a..67794404042a 100644 --- a/kernel/kcsan/report.c +++ b/kernel/kcsan/report.c @@ -308,10 +308,12 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries /* * Skips to the first entry that matches the function of @ip, and then replaces - * that entry with @ip, returning the entries to skip. + * that entry with @ip, returning the entries to skip with @replaced containing + * the replaced entry. */ static int -replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned long ip) +replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned long ip, + unsigned long *replaced) { unsigned long symbolsize, offset; unsigned long target_func; @@ -330,6 +332,7 @@ replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned lon func -= offset; if (func == target_func) { + *replaced = stack_entries[skip]; stack_entries[skip] = ip; return skip; } @@ -342,9 +345,10 @@ replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned lon } static int -sanitize_stack_entries(unsigned long stack_entries[], int num_entries, unsigned long ip) +sanitize_stack_entries(unsigned long stack_entries[], int num_entries, unsigned long ip, + unsigned long *replaced) { - return ip ? replace_stack_entry(stack_entries, num_entries, ip) : + return ip ? 
replace_stack_entry(stack_entries, num_entries, ip, replaced) : get_stack_skipnr(stack_entries, num_entries); } @@ -360,6 +364,14 @@ static int sym_strcmp(void *addr1, void *addr2) return strncmp(buf1, buf2, sizeof(buf1)); } +static void +print_stack_trace(unsigned long stack_entries[], int num_entries, unsigned long reordered_to) +{ + stack_trace_print(stack_entries, num_entries, 0); + if (reordered_to) + pr_err(" |\n +-> reordered to: %pS\n", (void *)reordered_to); +} + static void print_verbose_info(struct task_struct *task) { if (!task) @@ -378,10 +390,12 @@ static void print_report(enum kcsan_value_change value_change, struct other_info *other_info, u64 old, u64 new, u64 mask) { + unsigned long reordered_to = 0; unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 }; int num_stack_entries = stack_trace_save(stack_entries, NUM_STACK_ENTRIES, 1); - int skipnr = sanitize_stack_entries(stack_entries, num_stack_entries, ai->ip); + int skipnr = sanitize_stack_entries(stack_entries, num_stack_entries, ai->ip, &reordered_to); unsigned long this_frame = stack_entries[skipnr]; + unsigned long other_reordered_to = 0; unsigned long other_frame = 0; int other_skipnr = 0; /* silence uninit warnings */ @@ -394,7 +408,7 @@ static void print_report(enum kcsan_value_change value_change, if (other_info) { other_skipnr = sanitize_stack_entries(other_info->stack_entries, other_info->num_stack_entries, - other_info->ai.ip); + other_info->ai.ip, &other_reordered_to); other_frame = other_info->stack_entries[other_skipnr]; /* @value_change is only known for the other thread */ @@ -434,10 +448,9 @@ static void print_report(enum kcsan_value_change value_change, other_info->ai.cpu_id); /* Print the other thread's stack trace. */ - stack_trace_print(other_info->stack_entries + other_skipnr, + print_stack_trace(other_info->stack_entries + other_skipnr, other_info->num_stack_entries - other_skipnr, - 0); - + other_reordered_to); if (IS_ENABLED(CONFIG_KCSAN_VERBOSE)) print_verbose_info(other_info->task); @@ -451,9 +464,7 @@ static void print_report(enum kcsan_value_change value_change, get_thread_desc(ai->task_pid), ai->cpu_id); } /* Print stack trace of this thread. 
*/ - stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, - 0); - + print_stack_trace(stack_entries + skipnr, num_stack_entries - skipnr, reordered_to); if (IS_ENABLED(CONFIG_KCSAN_VERBOSE)) print_verbose_info(current);
From patchwork Tue Oct 5 10:58:51 2021
Date: Tue, 5 Oct 2021 12:58:51 +0200
Message-Id: <20211005105905.1994700-10-elver@google.com>
Subject: [PATCH -rcu/kcsan 09/23] kcsan: Document modeling of weak memory
From: Marco Elver <elver@google.com>

Document how KCSAN models a subset of weak memory and the subset of missing memory barriers it can detect as a result. Signed-off-by: Marco Elver --- Documentation/dev-tools/kcsan.rst | 72 +++++++++++++++++++++++++------ 1 file changed, 59 insertions(+), 13 deletions(-) diff --git a/Documentation/dev-tools/kcsan.rst b/Documentation/dev-tools/kcsan.rst index 7db43c7c09b8..4fc3773fead9 100644 --- a/Documentation/dev-tools/kcsan.rst +++ b/Documentation/dev-tools/kcsan.rst @@ -204,17 +204,17 @@ Ultimately this allows to determine the possible executions of concurrent code, and if that code is free from data races. KCSAN is aware of *marked atomic operations* (``READ_ONCE``, ``WRITE_ONCE``, -``atomic_*``, etc.), but is oblivious of any ordering guarantees and simply -assumes that memory barriers are placed correctly. In other words, KCSAN -assumes that as long as a plain access is not observed to race with another -conflicting access, memory operations are correctly ordered. - -This means that KCSAN will not report *potential* data races due to missing -memory ordering. Developers should therefore carefully consider the required -memory ordering requirements that remain unchecked. If, however, missing -memory ordering (that is observable with a particular compiler and -architecture) leads to an observable data race (e.g. entering a critical -section erroneously), KCSAN would report the resulting data race. +``atomic_*``, etc.), and a subset of ordering guarantees implied by memory +barriers. With ``CONFIG_KCSAN_WEAK_MEMORY=y``, KCSAN models load or store +buffering, and can detect missing ``smp_mb()``, ``smp_wmb()``, ``smp_rmb()``, +``smp_store_release()``, and all ``atomic_*`` operations with equivalent +implied barriers.
+ +Note, KCSAN will not report all data races due to missing memory ordering, +specifically where a memory barrier would be required to prohibit subsequent +memory operation from reordering before the barrier. Developers should +therefore carefully consider the required memory ordering requirements that +remain unchecked. Race Detection Beyond Data Races -------------------------------- @@ -268,6 +268,52 @@ marked operations, if all accesses to a variable that is accessed concurrently are properly marked, KCSAN will never trigger a watchpoint and therefore never report the accesses. +Modeling Weak Memory +~~~~~~~~~~~~~~~~~~~~ + +KCSAN's approach to detecting data races due to missing memory barriers is +based on modeling access reordering (with ``CONFIG_KCSAN_WEAK_MEMORY=y``). +Each plain memory access for which a watchpoint is set up, is also selected for +simulated reordering within the scope of its function (at most 1 in-flight +access). + +Once an access has been selected for reordering, it is checked along every +other access until the end of the function scope. If an appropriate memory +barrier is encountered, the access will no longer be considered for simulated +reordering. + +When the result of a memory operation should be ordered by a barrier, KCSAN can +then detect data races where the conflict only occurs as a result of a missing +barrier. Consider the example:: + + int x, flag; + void T1(void) + { + x = 1; // data race! + WRITE_ONCE(flag, 1); // correct: smp_store_release(&flag, 1) + } + void T2(void) + { + while (!READ_ONCE(flag)); // correct: smp_load_acquire(&flag) + ... = x; // data race! + } + +When weak memory modeling is enabled, KCSAN can consider ``x`` in ``T1`` for +simulated reordering. After the write of ``flag``, ``x`` is again checked for +concurrent accesses: because ``T2`` is able to proceed after the write of +``flag``, a data race is detected. With the correct barriers in place, ``x`` +would not be considered for reordering after the proper release of ``flag``, +and no data race would be detected. + +Deliberate trade-offs in complexity but also practical limitations mean only a +subset of data races due to missing memory barriers can be detected. Recall +that watchpoints are only set up for plain accesses, and the only access type +for which KCSAN simulates reordering. This means reordering of marked accesses +is not modeled. Furthermore, with the currently available compiler support, the +implementation is limited to modeling the effects of "buffering" (delaying +accesses), since the runtime cannot "prefetch" accesses. One implication of +this is that acquire operations do not require barrier instrumentation. + Key Properties ~~~~~~~~~~~~~~ @@ -290,8 +336,8 @@ Key Properties 4. **Detects Racy Writes from Devices:** Due to checking data values upon setting up watchpoints, racy writes from devices can also be detected. -5. **Memory Ordering:** KCSAN is *not* explicitly aware of the LKMM's ordering - rules; this may result in missed data races (false negatives). +5. **Memory Ordering:** KCSAN is aware of only a subset of LKMM ordering rules; + this may result in missed data races (false negatives). 6. 
**Analysis Accuracy:** For observed executions, due to using a sampling strategy, the analysis is *unsound* (false negatives possible), but aims to be complete.
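To connect the new documentation text with its example: a sketch of the corrected T1/T2 pairing that the inline comments allude to. With the release/acquire pair in place, the store to x can no longer be simulated as reordered past the flag handover, and no data race is reported (the "... = x" notation follows the documentation's own example):

  int x, flag;
  void T1(void)
  {
          x = 1;                        // ordered before the release
          smp_store_release(&flag, 1);  // pairs with the acquire in T2
  }
  void T2(void)
  {
          while (!smp_load_acquire(&flag));  // waits for T1's release
          ... = x;                      // no data race
  }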
From patchwork Tue Oct 5 10:58:52 2021
Date: Tue, 5 Oct 2021 12:58:52 +0200
Message-Id: <20211005105905.1994700-11-elver@google.com>
Subject: [PATCH -rcu/kcsan 10/23] kcsan: test: Match reordered or normal accesses
From: Marco Elver <elver@google.com>

Due to reordering accesses with weak memory modeling, any access can now appear as "(reordered)". If CONFIG_KCSAN_WEAK_MEMORY=y, match any permutation of accesses, so that an access is matched whether or not it is denoted "(reordered)". Signed-off-by: Marco Elver --- kernel/kcsan/kcsan_test.c | 92 +++++++++++++++++++++++++++------------ 1 file changed, 63 insertions(+), 29 deletions(-) diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c index 6e3c2b8bc608..ec054879201b 100644 --- a/kernel/kcsan/kcsan_test.c +++ b/kernel/kcsan/kcsan_test.c @@ -151,7 +151,7 @@ struct expect_report { /* Check observed report matches information in @r. */ __no_kcsan -static bool report_matches(const struct expect_report *r) +static bool __report_matches(const struct expect_report *r) { const bool is_assert = (r->access[0].type | r->access[1].type) & KCSAN_ACCESS_ASSERT; bool ret = false; @@ -253,6 +253,40 @@ static bool report_matches(const struct expect_report *r) return ret; } +static __always_inline const struct expect_report * +__report_set_scoped(struct expect_report *r, int accesses) +{ + BUILD_BUG_ON(accesses > 3); + + if (accesses & 1) + r->access[0].type |= KCSAN_ACCESS_SCOPED; + else + r->access[0].type &= ~KCSAN_ACCESS_SCOPED; + + if (accesses & 2) + r->access[1].type |= KCSAN_ACCESS_SCOPED; + else + r->access[1].type &= ~KCSAN_ACCESS_SCOPED; + + return r; +} + +__no_kcsan +static bool report_matches_any_reordered(struct expect_report *r) +{ + return __report_matches(__report_set_scoped(r, 0)) || + __report_matches(__report_set_scoped(r, 1)) || + __report_matches(__report_set_scoped(r, 2)) || + __report_matches(__report_set_scoped(r, 3)); +} + +#ifdef CONFIG_KCSAN_WEAK_MEMORY +/* Due to reordering accesses, any access may appear as "(reordered)".
*/ +#define report_matches report_matches_any_reordered +#else +#define report_matches __report_matches +#endif + /* ===== Test kernels ===== */ static long test_sink; @@ -438,13 +472,13 @@ static noinline void test_kernel_xor_1bit(void) __no_kcsan static void test_basic(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - static const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, @@ -469,14 +503,14 @@ static void test_basic(struct kunit *test) __no_kcsan static void test_concurrent_races(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { /* NULL will match any address. */ { test_kernel_rmw_array, NULL, 0, __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, { test_kernel_rmw_array, NULL, 0, __KCSAN_ACCESS_RW(0) }, }, }; - static const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_rmw_array, NULL, 0, 0 }, { test_kernel_rmw_array, NULL, 0, 0 }, @@ -498,13 +532,13 @@ static void test_concurrent_races(struct kunit *test) __no_kcsan static void test_novalue_change(struct kunit *test) { - const struct expect_report expect_rw = { + struct expect_report expect_rw = { .access = { { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - const struct expect_report expect_ww = { + struct expect_report expect_ww = { .access = { { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -530,13 +564,13 @@ static void test_novalue_change(struct kunit *test) __no_kcsan static void test_novalue_change_exception(struct kunit *test) { - const struct expect_report expect_rw = { + struct expect_report expect_rw = { .access = { { test_kernel_write_nochange_rcu, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - const struct expect_report expect_ww = { + struct expect_report expect_ww = { .access = { { test_kernel_write_nochange_rcu, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_write_nochange_rcu, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -556,7 +590,7 @@ static void test_novalue_change_exception(struct kunit *test) __no_kcsan static void test_unknown_origin(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { NULL }, @@ -578,7 +612,7 @@ static void test_unknown_origin(struct kunit *test) __no_kcsan static void test_write_write_assume_atomic(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_write, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -604,7 +638,7 @@ static void test_write_write_assume_atomic(struct kunit *test) __no_kcsan static void test_write_write_struct(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { 
test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, @@ -626,7 +660,7 @@ static void test_write_write_struct(struct kunit *test) __no_kcsan static void test_write_write_struct_part(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_write_struct_part, &test_struct.val[3], sizeof(test_struct.val[3]), KCSAN_ACCESS_WRITE }, @@ -658,7 +692,7 @@ static void test_read_atomic_write_atomic(struct kunit *test) __no_kcsan static void test_read_plain_atomic_write(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_write_atomic, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC }, @@ -679,7 +713,7 @@ static void test_read_plain_atomic_write(struct kunit *test) __no_kcsan static void test_read_plain_atomic_rmw(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_atomic_rmw, &test_var, sizeof(test_var), @@ -701,13 +735,13 @@ static void test_read_plain_atomic_rmw(struct kunit *test) __no_kcsan static void test_zero_size_access(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, }, }; - const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_read_struct_zero_size, &test_struct.val[3], 0, 0 }, @@ -741,7 +775,7 @@ static void test_data_race(struct kunit *test) __no_kcsan static void test_assert_exclusive_writer(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -759,7 +793,7 @@ static void test_assert_exclusive_writer(struct kunit *test) __no_kcsan static void test_assert_exclusive_access(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, @@ -777,19 +811,19 @@ static void test_assert_exclusive_access(struct kunit *test) __no_kcsan static void test_assert_exclusive_access_writer(struct kunit *test) { - const struct expect_report expect_access_writer = { + struct expect_report expect_access_writer = { .access = { { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, }, }; - const struct expect_report expect_access_access = { + struct expect_report expect_access_access = { .access = { { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, }, }; - const struct expect_report never = { + struct expect_report never = { .access 
= { { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, @@ -813,7 +847,7 @@ static void test_assert_exclusive_access_writer(struct kunit *test) __no_kcsan static void test_assert_exclusive_bits_change(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_assert_bits_change, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, { test_kernel_change_bits, &test_var, sizeof(test_var), @@ -844,13 +878,13 @@ static void test_assert_exclusive_bits_nochange(struct kunit *test) __no_kcsan static void test_assert_exclusive_writer_scoped(struct kunit *test) { - const struct expect_report expect_start = { + struct expect_report expect_start = { .access = { { test_kernel_assert_writer_scoped, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_SCOPED }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, }, }; - const struct expect_report expect_inscope = { + struct expect_report expect_inscope = { .access = { { test_enter_scope, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_SCOPED }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -871,16 +905,16 @@ static void test_assert_exclusive_writer_scoped(struct kunit *test) __no_kcsan static void test_assert_exclusive_access_scoped(struct kunit *test) { - const struct expect_report expect_start1 = { + struct expect_report expect_start1 = { .access = { { test_kernel_assert_access_scoped, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_SCOPED }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - const struct expect_report expect_start2 = { + struct expect_report expect_start2 = { .access = { expect_start1.access[0], expect_start1.access[0] }, }; - const struct expect_report expect_inscope = { + struct expect_report expect_inscope = { .access = { { test_enter_scope, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_SCOPED }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, @@ -985,7 +1019,7 @@ static void test_atomic_builtins(struct kunit *test) __no_kcsan static void test_1bit_value_change(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_xor_1bit, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) },
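In effect, a test's expectation now matches a report regardless of reordering annotations. For example, with CONFIG_KCSAN_WEAK_MEMORY=y, both of the following illustrative report lines (values borrowed from the earlier example report) satisfy the same expect_report entry:

| read to 0xffffffffc01e61a8 of 8 bytes by task 2310 on cpu 7:
| read (reordered) to 0xffffffffc01e61a8 of 8 bytes by task 2310 on cpu 7: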
From patchwork Tue Oct 5 10:58:53 2021
Date: Tue, 5 Oct 2021 12:58:53 +0200
Message-Id: <20211005105905.1994700-12-elver@google.com>
Subject: [PATCH -rcu/kcsan 11/23] kcsan: test: Add test cases for memory barrier instrumentation
From: Marco Elver <elver@google.com>
Add test cases to check that memory barriers are instrumented correctly, and that detection of missing memory barriers works as intended with CONFIG_KCSAN_STRICT=y. Signed-off-by: Marco Elver --- kernel/kcsan/kcsan_test.c | 320 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 320 insertions(+) diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c index ec054879201b..469f3044a5b3 100644 --- a/kernel/kcsan/kcsan_test.c +++ b/kernel/kcsan/kcsan_test.c @@ -16,9 +16,12 @@ #define pr_fmt(fmt) "kcsan_test: " fmt #include +#include +#include #include #include #include +#include #include #include #include @@ -305,6 +308,16 @@ static DEFINE_SEQLOCK(test_seqlock); __no_kcsan static noinline void sink_value(long v) { WRITE_ONCE(test_sink, v); } +/* + * Generates a delay and some accesses that enter the runtime but do not produce + * data races. + */ +static noinline void test_delay(int iter) +{ + while (iter--) + sink_value(READ_ONCE(test_sink)); +} + static noinline void test_kernel_read(void) { sink_value(test_var); } static noinline void test_kernel_write(void) @@ -466,8 +479,220 @@ static noinline void test_kernel_xor_1bit(void) kcsan_nestable_atomic_end(); } +#define TEST_KERNEL_LOCKED(name, acquire, release) \ + static noinline void test_kernel_##name(void) \ + { \ + long *flag = &test_struct.val[0]; \ + long v = 0; \ + if (!(acquire)) \ + return; \ + while (v++ < 100) { \ + test_var++; \ + barrier(); \ + } \ + release; \ + test_delay(10); \ + } + +TEST_KERNEL_LOCKED(with_memorder, + cmpxchg_acquire(flag, 0, 1) == 0, + smp_store_release(flag, 0)); +TEST_KERNEL_LOCKED(wrong_memorder, + cmpxchg_relaxed(flag, 0, 1) == 0, + WRITE_ONCE(*flag, 0)); +TEST_KERNEL_LOCKED(atomic_builtin_with_memorder, + __atomic_compare_exchange_n(flag, &v, 1, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED), + __atomic_store_n(flag, 0, __ATOMIC_RELEASE)); +TEST_KERNEL_LOCKED(atomic_builtin_wrong_memorder, + __atomic_compare_exchange_n(flag, &v, 1, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED), + __atomic_store_n(flag, 0, __ATOMIC_RELAXED)); + /* ===== Test cases ===== */ +/* + * Tests that various barriers have the expected effect on internal state. Not + * exhaustive on atomic_t operations. Unlike the selftest, also checks for + * too-strict barrier instrumentation; these can be tolerated, because it does + * not cause false positives, but at least we should be aware of such cases.
+ */ +__no_kcsan +static void test_barrier_nothreads(struct kunit *test) +{ +#ifdef CONFIG_KCSAN_WEAK_MEMORY + struct kcsan_scoped_access *reorder_access = &current->kcsan_ctx.reorder_access; +#else + struct kcsan_scoped_access *reorder_access = NULL; +#endif + arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED; + DEFINE_SPINLOCK(spinlock); + DEFINE_MUTEX(mutex); + atomic_t dummy; + + KCSAN_TEST_REQUIRES(test, reorder_access != NULL); + KCSAN_TEST_REQUIRES(test, IS_ENABLED(CONFIG_SMP)); + +#define __KCSAN_EXPECT_BARRIER(access_type, barrier, order_before, name) \ + do { \ + reorder_access->type = (access_type) | KCSAN_ACCESS_SCOPED; \ + reorder_access->size = sizeof(test_var); \ + barrier; \ + KUNIT_EXPECT_EQ_MSG(test, reorder_access->size, \ + order_before ? 0 : sizeof(test_var), \ + "improperly instrumented type=(" #access_type "): " name); \ + } while (0) +#define KCSAN_EXPECT_READ_BARRIER(b, o) __KCSAN_EXPECT_BARRIER(0, b, o, #b) +#define KCSAN_EXPECT_WRITE_BARRIER(b, o) __KCSAN_EXPECT_BARRIER(KCSAN_ACCESS_WRITE, b, o, #b) +#define KCSAN_EXPECT_RW_BARRIER(b, o) __KCSAN_EXPECT_BARRIER(KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE, b, o, #b) + + /* Force creating a valid entry in reorder_access first. */ + test_var = 0; + while (test_var++ < 1000000 && reorder_access->size != sizeof(test_var)) + __kcsan_check_read(&test_var, sizeof(test_var)); + KUNIT_ASSERT_EQ(test, reorder_access->size, sizeof(test_var)); + + kcsan_nestable_atomic_begin(); /* No watchpoints in called functions. */ + + KCSAN_EXPECT_READ_BARRIER(mb(), true); + KCSAN_EXPECT_READ_BARRIER(wmb(), false); + KCSAN_EXPECT_READ_BARRIER(rmb(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb(), true); + KCSAN_EXPECT_READ_BARRIER(smp_wmb(), false); + KCSAN_EXPECT_READ_BARRIER(smp_rmb(), true); + KCSAN_EXPECT_READ_BARRIER(dma_wmb(), false); + KCSAN_EXPECT_READ_BARRIER(dma_rmb(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb__before_atomic(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb__after_atomic(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb__after_spinlock(), true); + KCSAN_EXPECT_READ_BARRIER(smp_store_mb(test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(smp_load_acquire(&test_var), false); + KCSAN_EXPECT_READ_BARRIER(smp_store_release(&test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(xchg(&test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(xchg_release(&test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(xchg_relaxed(&test_var, 0), false); + KCSAN_EXPECT_READ_BARRIER(cmpxchg(&test_var, 0, 0), true); + KCSAN_EXPECT_READ_BARRIER(cmpxchg_release(&test_var, 0, 0), true); + KCSAN_EXPECT_READ_BARRIER(cmpxchg_relaxed(&test_var, 0, 0), false); + KCSAN_EXPECT_READ_BARRIER(atomic_read(&dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_read_acquire(&dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_set(&dummy, 0), false); + KCSAN_EXPECT_READ_BARRIER(atomic_set_release(&dummy, 0), true); + KCSAN_EXPECT_READ_BARRIER(atomic_add(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return_acquire(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return_release(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return_relaxed(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add_acquire(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add_release(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add_relaxed(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(test_and_set_bit(0,
&test_var), true); + KCSAN_EXPECT_READ_BARRIER(test_and_clear_bit(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(test_and_change_bit(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(__clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(arch_spin_lock(&arch_spinlock), false); + KCSAN_EXPECT_READ_BARRIER(arch_spin_unlock(&arch_spinlock), true); + KCSAN_EXPECT_READ_BARRIER(spin_lock(&spinlock), false); + KCSAN_EXPECT_READ_BARRIER(spin_unlock(&spinlock), true); + KCSAN_EXPECT_READ_BARRIER(mutex_lock(&mutex), false); + KCSAN_EXPECT_READ_BARRIER(mutex_unlock(&mutex), true); + + KCSAN_EXPECT_WRITE_BARRIER(mb(), true); + KCSAN_EXPECT_WRITE_BARRIER(wmb(), true); + KCSAN_EXPECT_WRITE_BARRIER(rmb(), false); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_wmb(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_rmb(), false); + KCSAN_EXPECT_WRITE_BARRIER(dma_wmb(), true); + KCSAN_EXPECT_WRITE_BARRIER(dma_rmb(), false); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb__before_atomic(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb__after_atomic(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb__after_spinlock(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_store_mb(test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_load_acquire(&test_var), false); + KCSAN_EXPECT_WRITE_BARRIER(smp_store_release(&test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(xchg(&test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(xchg_release(&test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(xchg_relaxed(&test_var, 0), false); + KCSAN_EXPECT_WRITE_BARRIER(cmpxchg(&test_var, 0, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(cmpxchg_release(&test_var, 0, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(cmpxchg_relaxed(&test_var, 0, 0), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_read(&dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_read_acquire(&dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_set(&dummy, 0), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_set_release(&dummy, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return_acquire(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return_release(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return_relaxed(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add_acquire(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add_release(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add_relaxed(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(test_and_set_bit(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(test_and_clear_bit(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(test_and_change_bit(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(__clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(arch_spin_lock(&arch_spinlock), false); + KCSAN_EXPECT_WRITE_BARRIER(arch_spin_unlock(&arch_spinlock), true); + KCSAN_EXPECT_WRITE_BARRIER(spin_lock(&spinlock), false); + KCSAN_EXPECT_WRITE_BARRIER(spin_unlock(&spinlock), true); + KCSAN_EXPECT_WRITE_BARRIER(mutex_lock(&mutex), false); + 
KCSAN_EXPECT_WRITE_BARRIER(mutex_unlock(&mutex), true); + + KCSAN_EXPECT_RW_BARRIER(mb(), true); + KCSAN_EXPECT_RW_BARRIER(wmb(), true); + KCSAN_EXPECT_RW_BARRIER(rmb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_wmb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_rmb(), true); + KCSAN_EXPECT_RW_BARRIER(dma_wmb(), true); + KCSAN_EXPECT_RW_BARRIER(dma_rmb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb__before_atomic(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb__after_atomic(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb__after_spinlock(), true); + KCSAN_EXPECT_RW_BARRIER(smp_store_mb(test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(smp_load_acquire(&test_var), false); + KCSAN_EXPECT_RW_BARRIER(smp_store_release(&test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(xchg(&test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(xchg_release(&test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(xchg_relaxed(&test_var, 0), false); + KCSAN_EXPECT_RW_BARRIER(cmpxchg(&test_var, 0, 0), true); + KCSAN_EXPECT_RW_BARRIER(cmpxchg_release(&test_var, 0, 0), true); + KCSAN_EXPECT_RW_BARRIER(cmpxchg_relaxed(&test_var, 0, 0), false); + KCSAN_EXPECT_RW_BARRIER(atomic_read(&dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_read_acquire(&dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_set(&dummy, 0), false); + KCSAN_EXPECT_RW_BARRIER(atomic_set_release(&dummy, 0), true); + KCSAN_EXPECT_RW_BARRIER(atomic_add(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return_acquire(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return_release(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return_relaxed(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add_acquire(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add_release(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add_relaxed(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(test_and_set_bit(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(test_and_clear_bit(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(test_and_change_bit(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(__clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(arch_spin_lock(&arch_spinlock), false); + KCSAN_EXPECT_RW_BARRIER(arch_spin_unlock(&arch_spinlock), true); + KCSAN_EXPECT_RW_BARRIER(spin_lock(&spinlock), false); + KCSAN_EXPECT_RW_BARRIER(spin_unlock(&spinlock), true); + KCSAN_EXPECT_RW_BARRIER(mutex_lock(&mutex), false); + KCSAN_EXPECT_RW_BARRIER(mutex_unlock(&mutex), true); + + kcsan_nestable_atomic_end(); +} + /* Simple test with normal data race. 
*/ __no_kcsan static void test_basic(struct kunit *test) @@ -1039,6 +1264,90 @@ static void test_1bit_value_change(struct kunit *test) KUNIT_EXPECT_TRUE(test, match); } +__no_kcsan +static void test_correct_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_with_memorder, test_kernel_with_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + KUNIT_EXPECT_FALSE(test, match_expect); +} + +__no_kcsan +static void test_missing_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_wrong_memorder, test_kernel_wrong_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + if (IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY)) + KUNIT_EXPECT_TRUE(test, match_expect); + else + KUNIT_EXPECT_FALSE(test, match_expect); +} + +__no_kcsan +static void test_atomic_builtins_correct_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_atomic_builtin_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_atomic_builtin_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_atomic_builtin_with_memorder, + test_kernel_atomic_builtin_with_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + KUNIT_EXPECT_FALSE(test, match_expect); +} + +__no_kcsan +static void test_atomic_builtins_missing_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_atomic_builtin_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_atomic_builtin_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_atomic_builtin_wrong_memorder, + test_kernel_atomic_builtin_wrong_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + if (IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY)) + KUNIT_EXPECT_TRUE(test, match_expect); + else + KUNIT_EXPECT_FALSE(test, match_expect); +} + /* * Generate thread counts for all test cases. Values generated are in interval * [2, 5] followed by exponentially increasing thread counts from 8 to 32. 
@@ -1088,6 +1397,7 @@ static const void *nthreads_gen_params(const void *prev, char *desc) #define KCSAN_KUNIT_CASE(test_name) KUNIT_CASE_PARAM(test_name, nthreads_gen_params) static struct kunit_case kcsan_test_cases[] = { + KUNIT_CASE(test_barrier_nothreads), KCSAN_KUNIT_CASE(test_basic), KCSAN_KUNIT_CASE(test_concurrent_races), KCSAN_KUNIT_CASE(test_novalue_change), @@ -1112,6 +1422,10 @@ static struct kunit_case kcsan_test_cases[] = { KCSAN_KUNIT_CASE(test_seqlock_noreport), KCSAN_KUNIT_CASE(test_atomic_builtins), KCSAN_KUNIT_CASE(test_1bit_value_change), + KCSAN_KUNIT_CASE(test_correct_barrier), + KCSAN_KUNIT_CASE(test_missing_barrier), + KCSAN_KUNIT_CASE(test_atomic_builtins_correct_barrier), + KCSAN_KUNIT_CASE(test_atomic_builtins_missing_barrier), {}, }; @@ -1176,6 +1490,9 @@ static int test_init(struct kunit *test) observed.nlines = 0; spin_unlock_irqrestore(&observed.lock, flags); + if (strstr(test->name, "nothreads")) + return 0; + if (!torture_init_begin((char *)test->name, 1)) return -EBUSY; @@ -1218,6 +1535,9 @@ static void test_exit(struct kunit *test) struct task_struct **stop_thread; int i; + if (strstr(test->name, "nothreads")) + return; + if (torture_cleanup_begin()) return;
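
To illustrate the pattern the TEST_KERNEL_LOCKED() cases above exercise, here is a stand-alone user-space sketch using C11 atomics in place of the kernel's cmpxchg_acquire()/smp_store_release(); the function and variable names are illustrative only, not taken from the patch:

	#include <stdatomic.h>

	static atomic_long flag;	/* 0: unlocked, 1: locked */
	static long counter;		/* intended to be protected by flag */

	/* Correct ordering: acquire on lock, release on unlock. */
	static void critical_section_correct(void)
	{
		long unlocked = 0;

		if (!atomic_compare_exchange_strong_explicit(&flag, &unlocked, 1,
							     memory_order_acquire,
							     memory_order_relaxed))
			return;
		counter++;	/* cannot leak outside the critical section */
		atomic_store_explicit(&flag, 0, memory_order_release);
	}

	/* Missing barriers: with relaxed ordering, counter++ may be reordered
	 * past the "unlock", which two concurrent callers turn into a data
	 * race on weakly ordered machines. */
	static void critical_section_broken(void)
	{
		long unlocked = 0;

		if (!atomic_compare_exchange_strong_explicit(&flag, &unlocked, 1,
							     memory_order_relaxed,
							     memory_order_relaxed))
			return;
		counter++;
		atomic_store_explicit(&flag, 0, memory_order_relaxed);
	}

Under CONFIG_KCSAN_WEAK_MEMORY=y, only the relaxed variant should produce a reordered-access report, which is what test_missing_barrier() asserts above.
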
From patchwork Tue Oct 5 10:58:54 2021
Subject: [PATCH -rcu/kcsan 12/23] kcsan: Ignore GCC 11+ warnings about TSan runtime support
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov, Ingo Molnar, Josh Poimboeuf, Mark Rutland, Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Date: Tue, 5 Oct 2021 12:58:54 +0200
Message-Id: <20211005105905.1994700-13-elver@google.com>
In-Reply-To: <20211005105905.1994700-1-elver@google.com>

GCC 11 has introduced a new warning option, -Wtsan [1], to warn about unsupported operations in the TSan runtime. But KCSAN != TSan runtime, so none of the warnings apply.

[1] https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Warning-Options.html

Ignore the warnings. Currently the warning only fires in the test for __atomic_thread_fence():

kernel/kcsan/kcsan_test.c: In function ‘test_atomic_builtins’:
kernel/kcsan/kcsan_test.c:1234:17: warning: ‘atomic_thread_fence’ is not supported with ‘-fsanitize=thread’ [-Wtsan]
 1234 |         __atomic_thread_fence(__ATOMIC_SEQ_CST);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

which exists to ensure the KCSAN runtime keeps supporting the builtin instrumentation.
Signed-off-by: Marco Elver
---
scripts/Makefile.kcsan | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/scripts/Makefile.kcsan b/scripts/Makefile.kcsan index 4c7f0d282e42..19f693b68a96 100644 --- a/scripts/Makefile.kcsan +++ b/scripts/Makefile.kcsan @@ -13,6 +13,12 @@ kcsan-cflags := -fsanitize=thread -fno-optimize-sibling-calls \ $(call cc-option,$(call cc-param,tsan-compound-read-before-write=1),$(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1))) \ $(call cc-param,tsan-distinguish-volatile=1) +ifdef CONFIG_CC_IS_GCC +# GCC started warning about operations unsupported by the TSan runtime. But +# KCSAN != TSan, so just ignore these warnings. +kcsan-cflags += -Wno-tsan +endif + ifndef CONFIG_KCSAN_WEAK_MEMORY kcsan-cflags += $(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0)) endif
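
For context, the construct GCC 11 complains about is easy to reproduce in isolation; a minimal sketch of my own (file and function names are illustrative, not from the patch):

	/* fence.c -- compile with: gcc-11 -fsanitize=thread -c fence.c */
	void fence(void)
	{
		/* gcc-11 emits: warning: 'atomic_thread_fence' is not
		 * supported with '-fsanitize=thread' [-Wtsan] */
		__atomic_thread_fence(__ATOMIC_SEQ_CST);
	}
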
From patchwork Tue Oct 5 10:58:55 2021
Subject: [PATCH -rcu/kcsan 13/23] kcsan: selftest: Add test case to check memory barrier instrumentation
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov, Ingo Molnar, Josh Poimboeuf, Mark Rutland, Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Date: Tue, 5 Oct 2021 12:58:55 +0200
Message-Id: <20211005105905.1994700-14-elver@google.com>
In-Reply-To: <20211005105905.1994700-1-elver@google.com>

Memory barrier instrumentation is crucial to avoid false positives. To avoid surprises, run a simple test case in the boot-time selftest to ensure memory barriers are still instrumented correctly.
Signed-off-by: Marco Elver
---
kernel/kcsan/Makefile | 2 + kernel/kcsan/selftest.c | 141 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 143 insertions(+) diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile index c2bb07f5bcc7..ff47e896de3b 100644 --- a/kernel/kcsan/Makefile +++ b/kernel/kcsan/Makefile @@ -11,6 +11,8 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \ -fno-stack-protector -DDISABLE_BRANCH_PROFILING obj-y := core.o debugfs.o report.o + +KCSAN_INSTRUMENT_BARRIERS_selftest.o := y obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o CFLAGS_kcsan_test.o := $(CFLAGS_KCSAN) -g -fno-omit-frame-pointer diff --git a/kernel/kcsan/selftest.c b/kernel/kcsan/selftest.c index b4295a3892b7..08c6b84b9ebe 100644 --- a/kernel/kcsan/selftest.c +++ b/kernel/kcsan/selftest.c @@ -7,10 +7,15 @@ #define pr_fmt(fmt) "kcsan: " fmt +#include +#include #include +#include #include #include #include +#include +#include #include #include "encoding.h" @@ -103,6 +108,141 @@ static bool __init test_matching_access(void) return true; } +/* + * Correct memory barrier instrumentation is critical to avoiding false + * positives: simple test to check at boot that certain barriers are always + * properly instrumented. See kcsan_test for a more complete test. + */ +static bool __init test_barrier(void) +{ +#ifdef CONFIG_KCSAN_WEAK_MEMORY + struct kcsan_scoped_access *reorder_access = &current->kcsan_ctx.reorder_access; +#else + struct kcsan_scoped_access *reorder_access = NULL; +#endif + bool ret = true; + arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED; + DEFINE_SPINLOCK(spinlock); + atomic_t dummy; + long test_var; + + if (!reorder_access || !IS_ENABLED(CONFIG_SMP)) + return true; + +#define __KCSAN_CHECK_BARRIER(access_type, barrier, name) \ + do { \ + reorder_access->type = (access_type) | KCSAN_ACCESS_SCOPED; \ + reorder_access->size = 1; \ + barrier; \ + if (reorder_access->size != 0) { \ + pr_err("improperly instrumented type=(" #access_type "): " name "\n"); \ + ret = false; \ + } \ + } while (0)
#define KCSAN_CHECK_READ_BARRIER(b) __KCSAN_CHECK_BARRIER(0, b, #b) +#define KCSAN_CHECK_WRITE_BARRIER(b) __KCSAN_CHECK_BARRIER(KCSAN_ACCESS_WRITE, b, #b) +#define KCSAN_CHECK_RW_BARRIER(b) __KCSAN_CHECK_BARRIER(KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND, b, #b) + + kcsan_nestable_atomic_begin(); /* No watchpoints in called functions.
*/ + + KCSAN_CHECK_READ_BARRIER(mb()); + KCSAN_CHECK_READ_BARRIER(rmb()); + KCSAN_CHECK_READ_BARRIER(smp_mb()); + KCSAN_CHECK_READ_BARRIER(smp_rmb()); + KCSAN_CHECK_READ_BARRIER(dma_rmb()); + KCSAN_CHECK_READ_BARRIER(smp_mb__before_atomic()); + KCSAN_CHECK_READ_BARRIER(smp_mb__after_atomic()); + KCSAN_CHECK_READ_BARRIER(smp_mb__after_spinlock()); + KCSAN_CHECK_READ_BARRIER(smp_store_mb(test_var, 0)); + KCSAN_CHECK_READ_BARRIER(smp_store_release(&test_var, 0)); + KCSAN_CHECK_READ_BARRIER(xchg(&test_var, 0)); + KCSAN_CHECK_READ_BARRIER(xchg_release(&test_var, 0)); + KCSAN_CHECK_READ_BARRIER(cmpxchg(&test_var, 0, 0)); + KCSAN_CHECK_READ_BARRIER(cmpxchg_release(&test_var, 0, 0)); + KCSAN_CHECK_READ_BARRIER(atomic_set_release(&dummy, 0)); + KCSAN_CHECK_READ_BARRIER(atomic_add_return(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(atomic_add_return_release(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(atomic_fetch_add(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(atomic_fetch_add_release(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(test_and_set_bit(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(test_and_clear_bit(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(test_and_change_bit(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(__clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var)); + arch_spin_lock(&arch_spinlock); + KCSAN_CHECK_READ_BARRIER(arch_spin_unlock(&arch_spinlock)); + spin_lock(&spinlock); + KCSAN_CHECK_READ_BARRIER(spin_unlock(&spinlock)); + + KCSAN_CHECK_WRITE_BARRIER(mb()); + KCSAN_CHECK_WRITE_BARRIER(wmb()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb()); + KCSAN_CHECK_WRITE_BARRIER(smp_wmb()); + KCSAN_CHECK_WRITE_BARRIER(dma_wmb()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb__before_atomic()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb__after_atomic()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb__after_spinlock()); + KCSAN_CHECK_WRITE_BARRIER(smp_store_mb(test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(smp_store_release(&test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(xchg(&test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(xchg_release(&test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(cmpxchg(&test_var, 0, 0)); + KCSAN_CHECK_WRITE_BARRIER(cmpxchg_release(&test_var, 0, 0)); + KCSAN_CHECK_WRITE_BARRIER(atomic_set_release(&dummy, 0)); + KCSAN_CHECK_WRITE_BARRIER(atomic_add_return(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(atomic_add_return_release(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(atomic_fetch_add(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(atomic_fetch_add_release(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(test_and_set_bit(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(test_and_clear_bit(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(test_and_change_bit(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(__clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var)); + arch_spin_lock(&arch_spinlock); + KCSAN_CHECK_WRITE_BARRIER(arch_spin_unlock(&arch_spinlock)); + spin_lock(&spinlock); + KCSAN_CHECK_WRITE_BARRIER(spin_unlock(&spinlock)); + + KCSAN_CHECK_RW_BARRIER(mb()); + KCSAN_CHECK_RW_BARRIER(wmb()); + KCSAN_CHECK_RW_BARRIER(rmb()); + KCSAN_CHECK_RW_BARRIER(smp_mb()); + KCSAN_CHECK_RW_BARRIER(smp_wmb()); + KCSAN_CHECK_RW_BARRIER(smp_rmb()); + KCSAN_CHECK_RW_BARRIER(dma_wmb()); + KCSAN_CHECK_RW_BARRIER(dma_rmb()); + KCSAN_CHECK_RW_BARRIER(smp_mb__before_atomic()); + KCSAN_CHECK_RW_BARRIER(smp_mb__after_atomic()); + KCSAN_CHECK_RW_BARRIER(smp_mb__after_spinlock()); + 
KCSAN_CHECK_RW_BARRIER(smp_store_mb(test_var, 0)); + KCSAN_CHECK_RW_BARRIER(smp_store_release(&test_var, 0)); + KCSAN_CHECK_RW_BARRIER(xchg(&test_var, 0)); + KCSAN_CHECK_RW_BARRIER(xchg_release(&test_var, 0)); + KCSAN_CHECK_RW_BARRIER(cmpxchg(&test_var, 0, 0)); + KCSAN_CHECK_RW_BARRIER(cmpxchg_release(&test_var, 0, 0)); + KCSAN_CHECK_RW_BARRIER(atomic_set_release(&dummy, 0)); + KCSAN_CHECK_RW_BARRIER(atomic_add_return(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(atomic_add_return_release(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(atomic_fetch_add(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(atomic_fetch_add_release(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(test_and_set_bit(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(test_and_clear_bit(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(test_and_change_bit(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(__clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var)); + arch_spin_lock(&arch_spinlock); + KCSAN_CHECK_RW_BARRIER(arch_spin_unlock(&arch_spinlock)); + spin_lock(&spinlock); + KCSAN_CHECK_RW_BARRIER(spin_unlock(&spinlock)); + + kcsan_nestable_atomic_end(); + + return ret; +} + static int __init kcsan_selftest(void) { int passed = 0; @@ -120,6 +260,7 @@ static int __init kcsan_selftest(void) RUN_TEST(test_requires); RUN_TEST(test_encode_decode); RUN_TEST(test_matching_access); + RUN_TEST(test_barrier); pr_info("selftest: %d/%d tests passed\n", passed, total); if (passed != total)
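
To see what each check does, here is __KCSAN_CHECK_BARRIER(0, smp_mb(), "smp_mb()") from the patch above, expanded by hand for illustration: the selftest plants a pending reorderable access, and a correctly instrumented barrier must flush it by resetting its size to 0.

	/* Hand expansion of __KCSAN_CHECK_BARRIER(0, smp_mb(), "smp_mb()"): */
	reorder_access->type = 0 | KCSAN_ACCESS_SCOPED; /* plant a pending reorderable read */
	reorder_access->size = 1;
	smp_mb();	/* if instrumented, the barrier resets size to 0 */
	if (reorder_access->size != 0) {
		pr_err("improperly instrumented type=(0): smp_mb()\n");
		ret = false;
	}
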
From patchwork Tue Oct 5 10:58:56 2021
Subject: [PATCH -rcu/kcsan 14/23] locking/barriers, kcsan: Add instrumentation for barriers
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov, Ingo Molnar, Josh Poimboeuf, Mark Rutland, Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Date: Tue, 5 Oct 2021 12:58:56 +0200
Message-Id: <20211005105905.1994700-15-elver@google.com>
In-Reply-To: <20211005105905.1994700-1-elver@google.com>

Adds the required KCSAN instrumentation for barriers if CONFIG_SMP.
KCSAN supports modeling the effects of: smp_mb() smp_rmb() smp_wmb() smp_store_release() Signed-off-by: Marco Elver --- include/asm-generic/barrier.h | 29 +++++++++++++++-------------- include/linux/spinlock.h | 2 +- 2 files changed, 16 insertions(+), 15 deletions(-) diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 640f09479bdf..27a9c9edfef6 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -14,6 +14,7 @@ #ifndef __ASSEMBLY__ #include +#include #include #ifndef nop @@ -62,15 +63,15 @@ #ifdef CONFIG_SMP #ifndef smp_mb -#define smp_mb() __smp_mb() +#define smp_mb() do { kcsan_mb(); __smp_mb(); } while (0) #endif #ifndef smp_rmb -#define smp_rmb() __smp_rmb() +#define smp_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0) #endif #ifndef smp_wmb -#define smp_wmb() __smp_wmb() +#define smp_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0) #endif #else /* !CONFIG_SMP */ @@ -123,19 +124,19 @@ do { \ #ifdef CONFIG_SMP #ifndef smp_store_mb -#define smp_store_mb(var, value) __smp_store_mb(var, value) +#define smp_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0) #endif #ifndef smp_mb__before_atomic -#define smp_mb__before_atomic() __smp_mb__before_atomic() +#define smp_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0) #endif #ifndef smp_mb__after_atomic -#define smp_mb__after_atomic() __smp_mb__after_atomic() +#define smp_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0) #endif #ifndef smp_store_release -#define smp_store_release(p, v) __smp_store_release(p, v) +#define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0) #endif #ifndef smp_load_acquire @@ -178,13 +179,13 @@ do { \ #endif /* CONFIG_SMP */ /* Barriers for virtual machine guests when talking to an SMP host */ -#define virt_mb() __smp_mb() -#define virt_rmb() __smp_rmb() -#define virt_wmb() __smp_wmb() -#define virt_store_mb(var, value) __smp_store_mb(var, value) -#define virt_mb__before_atomic() __smp_mb__before_atomic() -#define virt_mb__after_atomic() __smp_mb__after_atomic() -#define virt_store_release(p, v) __smp_store_release(p, v) +#define virt_mb() do { kcsan_mb(); __smp_mb(); } while (0) +#define virt_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0) +#define virt_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0) +#define virt_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0) +#define virt_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0) +#define virt_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0) +#define virt_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0) #define virt_load_acquire(p) __smp_load_acquire(p) /** diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index 45310ea1b1d7..f6d69808b929 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -172,7 +172,7 @@ do { \ * Architectures that can implement ACQUIRE better need to take care. 
 */ #ifndef smp_mb__after_spinlock -#define smp_mb__after_spinlock() do { } while (0) +#define smp_mb__after_spinlock() kcsan_mb() #endif #ifdef CONFIG_DEBUG_SPINLOCK
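
The shape of the change above is uniform: each barrier first informs KCSAN of its ordering semantics, then performs the real barrier. As a sketch of the net effect, with CONFIG_SMP=y a call site such as smp_mb(); now expands (per the #define above) to:

	do {
		kcsan_mb();	/* tell KCSAN a full memory barrier is here */
		__smp_mb();	/* the architecture's real barrier */
	} while (0);

When KCSAN is disabled, kcsan_mb() should compile away, leaving just the plain barrier.
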
From patchwork Tue Oct 5 10:58:57 2021
Subject: [PATCH -rcu/kcsan 15/23] locking/barriers, kcsan: Support generic instrumentation
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov, Ingo Molnar, Josh Poimboeuf, Mark Rutland, Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Date: Tue, 5 Oct 2021 12:58:57 +0200
Message-Id: <20211005105905.1994700-16-elver@google.com>
In-Reply-To: <20211005105905.1994700-1-elver@google.com>

Thus far only smp_*() barriers had been defined by asm-generic/barrier.h based on __smp_*() barriers, because the !SMP case is usually generic. With the introduction of instrumentation, it also makes sense to have asm-generic/barrier.h assist in the definition of instrumented versions of mb(), rmb(), wmb(), dma_rmb(), and dma_wmb(). Because there is no requirement to distinguish the !SMP case, the definition can be simpler: we can avoid also providing fallbacks for the __ prefixed cases, and only check if defined(__<barrier>), to finally define the KCSAN-instrumented versions. This also allows the compiler to complain if an architecture accidentally defines both the normal and __ prefixed variant.

Signed-off-by: Marco Elver
---
include/asm-generic/barrier.h | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 27a9c9edfef6..02c4339c8eeb 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -21,6 +21,31 @@ #define nop() asm volatile ("nop") #endif +/* + * Architectures that want generic instrumentation can define __ prefixed + * variants of all barriers. + */ + +#ifdef __mb +#define mb() do { kcsan_mb(); __mb(); } while (0) +#endif + +#ifdef __rmb +#define rmb() do { kcsan_rmb(); __rmb(); } while (0) +#endif + +#ifdef __wmb +#define wmb() do { kcsan_wmb(); __wmb(); } while (0) +#endif + +#ifdef __dma_rmb +#define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0) +#endif + +#ifdef __dma_wmb +#define dma_wmb() do { kcsan_wmb(); __dma_wmb(); } while (0) +#endif + /* * Force strict CPU ordering. And yes, this is required on UP too when we're * talking to devices.
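
As a sketch of the convention this enables (the architecture and the mfence-based definition below are hypothetical, not taken from the patch), an arch header need only provide the __ prefixed primitive and include the generic header:

	/* arch/foo/include/asm/barrier.h -- hypothetical example */
	#define __mb()	asm volatile("mfence" ::: "memory")

	#include <asm-generic/barrier.h>

	/*
	 * asm-generic/barrier.h then supplies:
	 *	#define mb() do { kcsan_mb(); __mb(); } while (0)
	 * Because no fallback definitions are provided for the __ prefixed
	 * cases, the compiler can complain if the architecture accidentally
	 * also defines mb() itself.
	 */
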
From patchwork Tue Oct 5 10:58:58 2021
Subject: [PATCH -rcu/kcsan 16/23] locking/atomics, kcsan: Add instrumentation for barriers
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov, Ingo Molnar, Josh Poimboeuf, Mark Rutland, Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org
Date: Tue, 5 Oct 2021 12:58:58 +0200
Message-Id: <20211005105905.1994700-17-elver@google.com>
In-Reply-To: <20211005105905.1994700-1-elver@google.com>

Adds the required KCSAN instrumentation for barriers of atomics.

Signed-off-by: Marco Elver
---
include/linux/atomic/atomic-instrumented.h | 135 ++++++++++++++++++++- scripts/atomic/gen-atomic-instrumented.sh | 41 +++++-- 2 files changed, 166 insertions(+), 10 deletions(-) diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h index a0f654370da3..5d69b143c28e 100644 --- a/include/linux/atomic/atomic-instrumented.h +++ b/include/linux/atomic/atomic-instrumented.h @@ -45,6 +45,7 @@ atomic_set(atomic_t *v, int i) static __always_inline void atomic_set_release(atomic_t *v, int i) { + kcsan_release(); instrument_atomic_write(v, sizeof(*v)); arch_atomic_set_release(v, i); } @@ -59,6 +60,7 @@ atomic_add(int i, atomic_t *v) static __always_inline int atomic_add_return(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return(i, v); } @@ -73,6 +75,7 @@ atomic_add_return_acquire(int i, atomic_t *v) static __always_inline int atomic_add_return_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return_release(i, v); } @@ -87,6 +90,7 @@ atomic_add_return_relaxed(int i, atomic_t *v) static __always_inline int atomic_fetch_add(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add(i, v); } @@ -101,6 +105,7 @@ atomic_fetch_add_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_add_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_release(i, v); } @@ -122,6 +127,7 @@ atomic_sub(int i, atomic_t *v) static __always_inline int atomic_sub_return(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return(i, v); } @@ -136,6 +142,7 @@ atomic_sub_return_acquire(int i, atomic_t *v) static __always_inline int atomic_sub_return_release(int i, atomic_t *v) { +
kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return_release(i, v); } @@ -150,6 +157,7 @@ atomic_sub_return_relaxed(int i, atomic_t *v) static __always_inline int atomic_fetch_sub(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub(i, v); } @@ -164,6 +172,7 @@ atomic_fetch_sub_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_sub_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub_release(i, v); } @@ -185,6 +194,7 @@ atomic_inc(atomic_t *v) static __always_inline int atomic_inc_return(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return(v); } @@ -199,6 +209,7 @@ atomic_inc_return_acquire(atomic_t *v) static __always_inline int atomic_inc_return_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return_release(v); } @@ -213,6 +224,7 @@ atomic_inc_return_relaxed(atomic_t *v) static __always_inline int atomic_fetch_inc(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc(v); } @@ -227,6 +239,7 @@ atomic_fetch_inc_acquire(atomic_t *v) static __always_inline int atomic_fetch_inc_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc_release(v); } @@ -248,6 +261,7 @@ atomic_dec(atomic_t *v) static __always_inline int atomic_dec_return(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_return(v); } @@ -262,6 +276,7 @@ atomic_dec_return_acquire(atomic_t *v) static __always_inline int atomic_dec_return_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_return_release(v); } @@ -276,6 +291,7 @@ atomic_dec_return_relaxed(atomic_t *v) static __always_inline int atomic_fetch_dec(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec(v); } @@ -290,6 +306,7 @@ atomic_fetch_dec_acquire(atomic_t *v) static __always_inline int atomic_fetch_dec_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec_release(v); } @@ -311,6 +328,7 @@ atomic_and(int i, atomic_t *v) static __always_inline int atomic_fetch_and(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and(i, v); } @@ -325,6 +343,7 @@ atomic_fetch_and_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_and_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and_release(i, v); } @@ -346,6 +365,7 @@ atomic_andnot(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot(i, v); } @@ -360,6 +380,7 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot_release(i, v); } @@ -381,6 +402,7 @@ atomic_or(int i, atomic_t *v) static __always_inline int atomic_fetch_or(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or(i, v); } @@ -395,6 +417,7 @@ atomic_fetch_or_acquire(int i, 
atomic_t *v) static __always_inline int atomic_fetch_or_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or_release(i, v); } @@ -416,6 +439,7 @@ atomic_xor(int i, atomic_t *v) static __always_inline int atomic_fetch_xor(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor(i, v); } @@ -430,6 +454,7 @@ atomic_fetch_xor_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_xor_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor_release(i, v); } @@ -444,6 +469,7 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v) static __always_inline int atomic_xchg(atomic_t *v, int i) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_xchg(v, i); } @@ -458,6 +484,7 @@ atomic_xchg_acquire(atomic_t *v, int i) static __always_inline int atomic_xchg_release(atomic_t *v, int i) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_xchg_release(v, i); } @@ -472,6 +499,7 @@ atomic_xchg_relaxed(atomic_t *v, int i) static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg(v, old, new); } @@ -486,6 +514,7 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new) static __always_inline int atomic_cmpxchg_release(atomic_t *v, int old, int new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg_release(v, old, new); } @@ -500,6 +529,7 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg(v, old, new); @@ -516,6 +546,7 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) static __always_inline bool atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg_release(v, old, new); @@ -532,6 +563,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_and_test(i, v); } @@ -539,6 +571,7 @@ atomic_sub_and_test(int i, atomic_t *v) static __always_inline bool atomic_dec_and_test(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_and_test(v); } @@ -546,6 +579,7 @@ atomic_dec_and_test(atomic_t *v) static __always_inline bool atomic_inc_and_test(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_and_test(v); } @@ -553,6 +587,7 @@ atomic_inc_and_test(atomic_t *v) static __always_inline bool atomic_add_negative(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_negative(i, v); } @@ -560,6 +595,7 @@ atomic_add_negative(int i, atomic_t *v) static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_unless(v, a, u); } @@ -567,6 +603,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u) static __always_inline bool atomic_add_unless(atomic_t *v, int a, int 
u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_unless(v, a, u); } @@ -574,6 +611,7 @@ atomic_add_unless(atomic_t *v, int a, int u) static __always_inline bool atomic_inc_not_zero(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_not_zero(v); } @@ -581,6 +619,7 @@ atomic_inc_not_zero(atomic_t *v) static __always_inline bool atomic_inc_unless_negative(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_unless_negative(v); } @@ -588,6 +627,7 @@ atomic_inc_unless_negative(atomic_t *v) static __always_inline bool atomic_dec_unless_positive(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_unless_positive(v); } @@ -595,6 +635,7 @@ atomic_dec_unless_positive(atomic_t *v) static __always_inline int atomic_dec_if_positive(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_if_positive(v); } @@ -623,6 +664,7 @@ atomic64_set(atomic64_t *v, s64 i) static __always_inline void atomic64_set_release(atomic64_t *v, s64 i) { + kcsan_release(); instrument_atomic_write(v, sizeof(*v)); arch_atomic64_set_release(v, i); } @@ -637,6 +679,7 @@ atomic64_add(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return(i, v); } @@ -651,6 +694,7 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return_release(i, v); } @@ -665,6 +709,7 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add(i, v); } @@ -679,6 +724,7 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_release(i, v); } @@ -700,6 +746,7 @@ atomic64_sub(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return(i, v); } @@ -714,6 +761,7 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return_release(i, v); } @@ -728,6 +776,7 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_sub(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub(i, v); } @@ -742,6 +791,7 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_sub_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub_release(i, v); } @@ -763,6 +813,7 @@ atomic64_inc(atomic64_t *v) static __always_inline s64 atomic64_inc_return(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return(v); } @@ -777,6 +828,7 @@ atomic64_inc_return_acquire(atomic64_t *v) static __always_inline s64 atomic64_inc_return_release(atomic64_t *v) { 
+ kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return_release(v); } @@ -791,6 +843,7 @@ atomic64_inc_return_relaxed(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc(v); } @@ -805,6 +858,7 @@ atomic64_fetch_inc_acquire(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc_release(v); } @@ -826,6 +880,7 @@ atomic64_dec(atomic64_t *v) static __always_inline s64 atomic64_dec_return(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return(v); } @@ -840,6 +895,7 @@ atomic64_dec_return_acquire(atomic64_t *v) static __always_inline s64 atomic64_dec_return_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return_release(v); } @@ -854,6 +910,7 @@ atomic64_dec_return_relaxed(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec(v); } @@ -868,6 +925,7 @@ atomic64_fetch_dec_acquire(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec_release(v); } @@ -889,6 +947,7 @@ atomic64_and(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and(i, v); } @@ -903,6 +962,7 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and_release(i, v); } @@ -924,6 +984,7 @@ atomic64_andnot(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot(i, v); } @@ -938,6 +999,7 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot_release(i, v); } @@ -959,6 +1021,7 @@ atomic64_or(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_or(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or(i, v); } @@ -973,6 +1036,7 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_or_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or_release(i, v); } @@ -994,6 +1058,7 @@ atomic64_xor(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor(i, v); } @@ -1008,6 +1073,7 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor_release(i, v); } @@ -1022,6 +1088,7 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) static __always_inline s64 
atomic64_xchg(atomic64_t *v, s64 i) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg(v, i); } @@ -1036,6 +1103,7 @@ atomic64_xchg_acquire(atomic64_t *v, s64 i) static __always_inline s64 atomic64_xchg_release(atomic64_t *v, s64 i) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg_release(v, i); } @@ -1050,6 +1118,7 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 i) static __always_inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg(v, old, new); } @@ -1064,6 +1133,7 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) static __always_inline s64 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg_release(v, old, new); } @@ -1078,6 +1148,7 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg(v, old, new); @@ -1094,6 +1165,7 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg_release(v, old, new); @@ -1110,6 +1182,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_sub_and_test(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_and_test(i, v); } @@ -1117,6 +1190,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v) static __always_inline bool atomic64_dec_and_test(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_and_test(v); } @@ -1124,6 +1198,7 @@ atomic64_dec_and_test(atomic64_t *v) static __always_inline bool atomic64_inc_and_test(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_and_test(v); } @@ -1131,6 +1206,7 @@ atomic64_inc_and_test(atomic64_t *v) static __always_inline bool atomic64_add_negative(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_negative(i, v); } @@ -1138,6 +1214,7 @@ atomic64_add_negative(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_unless(v, a, u); } @@ -1145,6 +1222,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_unless(v, a, u); } @@ -1152,6 +1230,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u) static __always_inline bool atomic64_inc_not_zero(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_not_zero(v); } @@ -1159,6 +1238,7 @@ atomic64_inc_not_zero(atomic64_t *v) static __always_inline bool atomic64_inc_unless_negative(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_unless_negative(v); } @@ -1166,6 +1246,7 @@ 
atomic64_inc_unless_negative(atomic64_t *v) static __always_inline bool atomic64_dec_unless_positive(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_unless_positive(v); } @@ -1173,6 +1254,7 @@ atomic64_dec_unless_positive(atomic64_t *v) static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_if_positive(v); } @@ -1201,6 +1283,7 @@ atomic_long_set(atomic_long_t *v, long i) static __always_inline void atomic_long_set_release(atomic_long_t *v, long i) { + kcsan_release(); instrument_atomic_write(v, sizeof(*v)); arch_atomic_long_set_release(v, i); } @@ -1215,6 +1298,7 @@ atomic_long_add(long i, atomic_long_t *v) static __always_inline long atomic_long_add_return(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_return(i, v); } @@ -1229,6 +1313,7 @@ atomic_long_add_return_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_add_return_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_return_release(i, v); } @@ -1243,6 +1328,7 @@ atomic_long_add_return_relaxed(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_add(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_add(i, v); } @@ -1257,6 +1343,7 @@ atomic_long_fetch_add_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_add_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_add_release(i, v); } @@ -1278,6 +1365,7 @@ atomic_long_sub(long i, atomic_long_t *v) static __always_inline long atomic_long_sub_return(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_sub_return(i, v); } @@ -1292,6 +1380,7 @@ atomic_long_sub_return_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_sub_return_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_sub_return_release(i, v); } @@ -1306,6 +1395,7 @@ atomic_long_sub_return_relaxed(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_sub(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_sub(i, v); } @@ -1320,6 +1410,7 @@ atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_sub_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_sub_release(i, v); } @@ -1341,6 +1432,7 @@ atomic_long_inc(atomic_long_t *v) static __always_inline long atomic_long_inc_return(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_return(v); } @@ -1355,6 +1447,7 @@ atomic_long_inc_return_acquire(atomic_long_t *v) static __always_inline long atomic_long_inc_return_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_return_release(v); } @@ -1369,6 +1462,7 @@ atomic_long_inc_return_relaxed(atomic_long_t *v) static __always_inline long atomic_long_fetch_inc(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return 
arch_atomic_long_fetch_inc(v); } @@ -1383,6 +1477,7 @@ atomic_long_fetch_inc_acquire(atomic_long_t *v) static __always_inline long atomic_long_fetch_inc_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_inc_release(v); } @@ -1404,6 +1499,7 @@ atomic_long_dec(atomic_long_t *v) static __always_inline long atomic_long_dec_return(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_return(v); } @@ -1418,6 +1514,7 @@ atomic_long_dec_return_acquire(atomic_long_t *v) static __always_inline long atomic_long_dec_return_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_return_release(v); } @@ -1432,6 +1529,7 @@ atomic_long_dec_return_relaxed(atomic_long_t *v) static __always_inline long atomic_long_fetch_dec(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_dec(v); } @@ -1446,6 +1544,7 @@ atomic_long_fetch_dec_acquire(atomic_long_t *v) static __always_inline long atomic_long_fetch_dec_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_dec_release(v); } @@ -1467,6 +1566,7 @@ atomic_long_and(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_and(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_and(i, v); } @@ -1481,6 +1581,7 @@ atomic_long_fetch_and_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_and_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_and_release(i, v); } @@ -1502,6 +1603,7 @@ atomic_long_andnot(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_andnot(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_andnot(i, v); } @@ -1516,6 +1618,7 @@ atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_andnot_release(i, v); } @@ -1537,6 +1640,7 @@ atomic_long_or(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_or(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_or(i, v); } @@ -1551,6 +1655,7 @@ atomic_long_fetch_or_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_or_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_or_release(i, v); } @@ -1572,6 +1677,7 @@ atomic_long_xor(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_xor(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_xor(i, v); } @@ -1586,6 +1692,7 @@ atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_xor_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_xor_release(i, v); } @@ -1600,6 +1707,7 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) static __always_inline long atomic_long_xchg(atomic_long_t *v, long i) { + kcsan_mb(); 
instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_xchg(v, i); } @@ -1614,6 +1722,7 @@ atomic_long_xchg_acquire(atomic_long_t *v, long i) static __always_inline long atomic_long_xchg_release(atomic_long_t *v, long i) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_xchg_release(v, i); } @@ -1628,6 +1737,7 @@ atomic_long_xchg_relaxed(atomic_long_t *v, long i) static __always_inline long atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_cmpxchg(v, old, new); } @@ -1642,6 +1752,7 @@ atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) static __always_inline long atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_cmpxchg_release(v, old, new); } @@ -1656,6 +1767,7 @@ atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) static __always_inline bool atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_long_try_cmpxchg(v, old, new); @@ -1672,6 +1784,7 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) static __always_inline bool atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_long_try_cmpxchg_release(v, old, new); @@ -1688,6 +1801,7 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) static __always_inline bool atomic_long_sub_and_test(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_sub_and_test(i, v); } @@ -1695,6 +1809,7 @@ atomic_long_sub_and_test(long i, atomic_long_t *v) static __always_inline bool atomic_long_dec_and_test(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_and_test(v); } @@ -1702,6 +1817,7 @@ atomic_long_dec_and_test(atomic_long_t *v) static __always_inline bool atomic_long_inc_and_test(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_and_test(v); } @@ -1709,6 +1825,7 @@ atomic_long_inc_and_test(atomic_long_t *v) static __always_inline bool atomic_long_add_negative(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_negative(i, v); } @@ -1716,6 +1833,7 @@ atomic_long_add_negative(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_add_unless(v, a, u); } @@ -1723,6 +1841,7 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool atomic_long_add_unless(atomic_long_t *v, long a, long u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_unless(v, a, u); } @@ -1730,6 +1849,7 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool atomic_long_inc_not_zero(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_not_zero(v); } @@ -1737,6 +1857,7 @@ atomic_long_inc_not_zero(atomic_long_t *v) static __always_inline bool 
atomic_long_inc_unless_negative(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_unless_negative(v); } @@ -1744,6 +1865,7 @@ atomic_long_inc_unless_negative(atomic_long_t *v) static __always_inline bool atomic_long_dec_unless_positive(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_unless_positive(v); } @@ -1751,6 +1873,7 @@ atomic_long_dec_unless_positive(atomic_long_t *v) static __always_inline long atomic_long_dec_if_positive(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_if_positive(v); } @@ -1758,6 +1881,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define xchg(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_xchg(__ai_ptr, __VA_ARGS__); \ }) @@ -1772,6 +1896,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define xchg_release(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_xchg_release(__ai_ptr, __VA_ARGS__); \ }) @@ -1786,6 +1911,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) @@ -1800,6 +1926,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg_release(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ }) @@ -1814,6 +1941,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg64(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \ }) @@ -1828,6 +1956,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg64_release(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \ }) @@ -1843,6 +1972,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ typeof(oldp) __ai_oldp = (oldp); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \ @@ -1861,6 +1991,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ typeof(oldp) __ai_oldp = (oldp); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ @@ -1892,6 +2023,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define sync_cmpxchg(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) @@ -1899,6 +2031,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg_double(ptr, ...) 
\ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \ arch_cmpxchg_double(__ai_ptr, __VA_ARGS__); \ }) @@ -1912,4 +2045,4 @@ atomic_long_dec_if_positive(atomic_long_t *v) }) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// 2a9553f0a9d5619f19151092df5cabbbf16ce835 +// 87c974b93032afd42143613434d1a7788fa598f9

diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index 035ceb4ee85c..68f902731d01 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -34,6 +34,14 @@ gen_param_check()
 gen_params_checks()
 {
 	local meta="$1"; shift
+	local order="$1"; shift
+
+	if [ "${order}" = "_release" ]; then
+		printf "\tkcsan_release();\n"
+	elif [ -z "${order}" ] && ! meta_in "$meta" "slv"; then
+		# RMW with return value is fully ordered
+		printf "\tkcsan_mb();\n"
+	fi

 	while [ "$#" -gt 0 ]; do
 		gen_param_check "$meta" "$1"
@@ -56,7 +64,7 @@ gen_proto_order_variant()

 	local ret="$(gen_ret_type "${meta}" "${int}")"
 	local params="$(gen_params "${int}" "${atomic}" "$@")"
-	local checks="$(gen_params_checks "${meta}" "$@")"
+	local checks="$(gen_params_checks "${meta}" "${order}" "$@")"
 	local args="$(gen_args "$@")"
 	local retstmt="$(gen_ret_stmt "${meta}")"
@@ -75,29 +83,44 @@ EOF
 gen_xchg()
 {
 	local xchg="$1"; shift
+	local order="$1"; shift
 	local mult="$1"; shift

+	kcsan_barrier=""
+	if [ "${xchg%_local}" = "${xchg}" ]; then
+		case "$order" in
+		_release)	kcsan_barrier="kcsan_release()" ;;
+		"")		kcsan_barrier="kcsan_mb()" ;;
+		esac
+	fi
+
 	if [ "${xchg%${xchg#try_cmpxchg}}" = "try_cmpxchg" ] ; then
 		cat <<EOF
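For reference, this is the shape of the wrappers the updated script generates; the excerpt below mirrors the atomic64_fetch_add() hunks shown earlier in this diff, so it adds nothing beyond what the patch already emits: a fully ordered RMW gains a leading kcsan_mb(), the _release variant a kcsan_release().

/* Generated wrapper for a fully ordered RMW (illustrative excerpt). */
static __always_inline s64
atomic64_fetch_add(s64 i, atomic64_t *v)
{
	kcsan_mb();
	instrument_atomic_read_write(v, sizeof(*v));
	return arch_atomic64_fetch_add(i, v);
}

/* Generated wrapper for the _release variant. */
static __always_inline s64
atomic64_fetch_add_release(s64 i, atomic64_t *v)
{
	kcsan_release();
	instrument_atomic_read_write(v, sizeof(*v));
	return arch_atomic64_fetch_add_release(i, v);
}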
From patchwork Tue Oct 5 10:59:00 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536189
Date: Tue, 5 Oct 2021 12:59:00 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-19-elver@google.com>
Subject: [PATCH -rcu/kcsan 18/23] x86/barriers, kcsan: Use generic instrumentation for non-smp barriers
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"

Prefix all barriers with __, now that asm-generic/barrier.h supports
defining the final instrumented version of these barriers. The change
is limited to barriers used by x86-64.

Signed-off-by: Marco Elver
---
 arch/x86/include/asm/barrier.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 3ba772a69cc8..35389b2af88e 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -19,9 +19,9 @@
 #define wmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)", "sfence", \
 				       X86_FEATURE_XMM2) ::: "memory", "cc")
 #else
-#define mb()	asm volatile("mfence":::"memory")
-#define rmb()	asm volatile("lfence":::"memory")
-#define wmb()	asm volatile("sfence" ::: "memory")
+#define __mb()	asm volatile("mfence":::"memory")
+#define __rmb()	asm volatile("lfence":::"memory")
+#define __wmb()	asm volatile("sfence" ::: "memory")
 #endif

 /**
@@ -51,8 +51,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 /* Prevent speculative execution past this barrier. */
 #define barrier_nospec() alternative("", "lfence", X86_FEATURE_LFENCE_RDTSC)

-#define dma_rmb()	barrier()
-#define dma_wmb()	barrier()
+#define __dma_rmb()	barrier()
+#define __dma_wmb()	barrier()

 #define __smp_mb()	asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
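For context, a minimal sketch of the generic side this commit relies on, assuming the asm-generic/barrier.h changes earlier in this series (simplified, not the verbatim header): when an architecture provides the double-underscore primitives, the generic header defines the final barriers with KCSAN barrier modeling folded in.

/* Sketch: asm-generic/barrier.h composes instrumented barriers from the
 * arch-provided __mb()/__rmb()/__wmb() (simplified). */
#ifdef __mb
#define mb()	do { kcsan_mb(); __mb(); } while (0)
#endif

#ifdef __rmb
#define rmb()	do { kcsan_rmb(); __rmb(); } while (0)
#endif

#ifdef __wmb
#define wmb()	do { kcsan_wmb(); __wmb(); } while (0)
#endif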
From patchwork Tue Oct 5 10:59:01 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536191
Date: Tue, 5 Oct 2021 12:59:01 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-20-elver@google.com>
Subject: [PATCH -rcu/kcsan 19/23] x86/qspinlock, kcsan: Instrument barrier of pv_queued_spin_unlock()
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"

If CONFIG_PARAVIRT_SPINLOCKS=y, queued_spin_unlock() is implemented
using pv_queued_spin_unlock(), which is entirely inline-asm based. As
such, we do not receive any KCSAN barrier instrumentation via regular
atomic operations.

Add the missing KCSAN barrier instrumentation for the
CONFIG_PARAVIRT_SPINLOCKS case.

Signed-off-by: Marco Elver
---
 arch/x86/include/asm/qspinlock.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index d86ab942219c..d87451df480b 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -53,6 +53,7 @@ static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)

 static inline void queued_spin_unlock(struct qspinlock *lock)
 {
+	kcsan_release();
 	pv_queued_spin_unlock(lock);
 }
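For contrast, a simplified sketch of the generic (non-paravirt) unlock: there the unlock is a release store through an instrumented primitive, so KCSAN already models its ordering without an explicit annotation.

/* Simplified sketch of the generic unlock path (after
 * include/asm-generic/qspinlock.h): the instrumented release store is
 * what the paravirt asm path above lacks, hence the explicit
 * kcsan_release(). */
static __always_inline void queued_spin_unlock(struct qspinlock *lock)
{
	smp_store_release(&lock->locked, 0);
}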
From patchwork Tue Oct 5 10:59:02 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536193
Date: Tue, 5 Oct 2021 12:59:02 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-21-elver@google.com>
Subject: [PATCH -rcu/kcsan 20/23] mm, kcsan: Enable barrier instrumentation
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"

Some memory management calls imply memory barriers that are required to
avoid false positives. For example, without the correct instrumentation,
we could observe data races of the following variant:

 T0                     | T1
------------------------+------------------------
                        | *a = 42;    ---+
                        | kfree(a);      |
                        |                |
 b = kmalloc(..); // b == a          <-+
                        |
 *b = 42; // not a data race!
                        |

Therefore, instrument memory barriers in all allocator code currently
not being instrumented in a default build.

Signed-off-by: Marco Elver
---
 mm/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/Makefile b/mm/Makefile
index fc60a40ce954..11e9fcd410be 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -15,6 +15,8 @@ KCSAN_SANITIZE_slab_common.o := n
 KCSAN_SANITIZE_slab.o := n
 KCSAN_SANITIZE_slub.o := n
 KCSAN_SANITIZE_page_alloc.o := n
+# But enable explicit instrumentation for memory barriers.
+KCSAN_INSTRUMENT_BARRIERS := y

 # These files are disabled because they produce non-interesting and/or
 # flaky coverage that is not a function of syscall inputs. E.g. slab is out of
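To make the diagram concrete, a hypothetical sketch of the same pattern (invented function names; only the kfree()/kmalloc() ordering argument is from the commit message). The allocator's internal release/acquire ordering is what makes the reuse safe; if KCSAN cannot see those barriers, it may report the two plain writes as a race.

/* Hypothetical illustration: one heap object freed by T1 and
 * immediately reallocated at the same address by T0. */
static void t1_write_then_free(int *a)
{
	*a = 42;	/* last write to the object */
	kfree(a);	/* the allocator internally orders this free... */
}

static int *t0_alloc_and_write(void)
{
	int *b = kmalloc(sizeof(*b), GFP_KERNEL);	/* ...before this reuse; may return a's address */

	if (b)
		*b = 42;	/* not a data race: ordered via the allocator */
	return b;
}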
From patchwork Tue Oct 5 10:59:03 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536199
Date: Tue, 5 Oct 2021 12:59:03 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-22-elver@google.com>
Subject: [PATCH -rcu/kcsan 21/23] sched, kcsan: Enable memory barrier instrumentation
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"

There's no fundamental reason to disable KCSAN for scheduler code,
except for excessive noise and performance concerns (instrumenting
scheduler code is usually a good way to stress test KCSAN itself).

However, several core sched functions imply memory barriers that are
invisible to KCSAN without instrumentation, but are required to avoid
false positives. Therefore, unconditionally enable instrumentation of
memory barriers in scheduler code. Also update the comment to reflect
this and be a bit more brief.

Signed-off-by: Marco Elver
---
 kernel/sched/Makefile | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index 978fcfca5871..90da599f5560 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -7,11 +7,10 @@ endif
 # that is not a function of syscall inputs. E.g. involuntary context switches.
 KCOV_INSTRUMENT := n

-# There are numerous data races here, however, most of them are due to plain accesses.
-# This would make it even harder for syzbot to find reproducers, because these
-# bugs trigger without specific input. Disable by default, but should re-enable
-# eventually.
+# Disable KCSAN to avoid excessive noise and performance degradation. To avoid
+# false positives ensure barriers implied by sched functions are instrumented.
 KCSAN_SANITIZE := n
+KCSAN_INSTRUMENT_BARRIERS := y

 ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
 # According to Alan Modra, the -fno-omit-frame-pointer is
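For context, a simplified sketch of how barrier-only instrumentation is assumed to take effect; the exact header logic lives elsewhere in this series, and the preprocessor symbol shown is illustrative of the mechanism rather than a verbatim quote.

/* Sketch (simplified, assumed header logic): barrier annotations
 * compile to real calls only when the file is instrumented by KCSAN,
 * or when it opts into barrier-only instrumentation as kernel/sched/
 * does above; otherwise they compile away entirely. */
#if defined(CONFIG_KCSAN_WEAK_MEMORY) && \
    (defined(__SANITIZE_THREAD__) || defined(__KCSAN_INSTRUMENT_BARRIERS__))
#define kcsan_mb	__kcsan_mb
#define kcsan_release	__kcsan_release
#else
#define kcsan_mb()	do { } while (0)
#define kcsan_release()	do { } while (0)
#endif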
From patchwork Tue Oct 5 10:59:04 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536195
Date: Tue, 5 Oct 2021 12:59:04 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-23-elver@google.com>
Subject: [PATCH -rcu/kcsan 22/23] objtool, kcsan: Add memory barrier instrumentation to whitelist
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"

Add KCSAN's memory barrier instrumentation functions to objtool's
uaccess whitelist.

Signed-off-by: Marco Elver
---
 tools/objtool/check.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index e5947fbb9e7a..7e8cd3ba5482 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -651,6 +651,10 @@ static const char *uaccess_safe_builtin[] = {
 	"__asan_report_store16_noabort",
 	/* KCSAN */
 	"__kcsan_check_access",
+	"__kcsan_mb",
+	"__kcsan_wmb",
+	"__kcsan_rmb",
+	"__kcsan_release",
 	"kcsan_found_watchpoint",
 	"kcsan_setup_watchpoint",
 	"kcsan_check_scoped_accesses",
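To illustrate why the whitelist entries are needed, a hypothetical uaccess region (invented function name): objtool flags any call made between user_access_begin() and user_access_end() unless the callee is on this whitelist, and with barrier instrumentation a barrier inside such a region now emits a call to __kcsan_mb() and friends.

/* Hypothetical example: with barrier instrumentation, smp_mb() here
 * expands to kcsan_mb() plus the real barrier, i.e. a call to
 * __kcsan_mb() inside the uaccess region. */
static int set_flag_user(u32 __user *uaddr, u32 val)
{
	if (!user_access_begin(uaddr, sizeof(*uaddr)))
		return -EFAULT;
	unsafe_put_user(val, uaddr, efault);
	smp_mb();	/* instrumented: __kcsan_mb() + the real barrier */
	user_access_end();
	return 0;
efault:
	user_access_end();
	return -EFAULT;
}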
From patchwork Tue Oct 5 10:59:05 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12536201
Date: Tue, 5 Oct 2021 12:59:05 +0200
In-Reply-To: <20211005105905.1994700-1-elver@google.com>
Message-Id: <20211005105905.1994700-24-elver@google.com>
Subject: [PATCH -rcu/kcsan 23/23] objtool, kcsan: Remove memory barrier instrumentation from noinstr
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Josh Poimboeuf , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, x86@kernel.org X-Rspamd-Server: rspam01 X-Rspamd-Queue-Id: 90DD2F00240C X-Stat-Signature: 6xqo5zfsjnd7x4hbs8npj5pt3ea6gxgz Authentication-Results: imf11.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=eBKhKFS4; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf11.hostedemail.com: domain of 3VDBcYQUKCDcXeoXkZhhZeX.Vhfebgnq-ffdoTVd.hkZ@flex--elver.bounces.google.com designates 209.85.222.202 as permitted sender) smtp.mailfrom=3VDBcYQUKCDcXeoXkZhhZeX.Vhfebgnq-ffdoTVd.hkZ@flex--elver.bounces.google.com X-HE-Tag: 1633431637-38856 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Teach objtool to turn instrumentation required for memory barrier modeling into nops in noinstr text. The __tsan_func_entry/exit calls are still emitted by compilers even with the __no_sanitize_thread attribute. The memory barrier instrumentation will be inserted explicitly (without compiler help), and thus needs to also explicitly be removed. Signed-off-by: Marco Elver --- tools/objtool/check.c | 32 ++++++++++++++++++++++++++------ 1 file changed, 26 insertions(+), 6 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 7e8cd3ba5482..7b694e639164 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -965,6 +965,31 @@ static struct symbol *find_call_destination(struct section *sec, unsigned long o return call_dest; } +static bool should_remove_if_noinstr(const char *name) +{ + /* + * Many compilers cannot disable KCOV with a function attribute so they + * need a little help, NOP out any KCOV calls from noinstr text. + */ + if (!strncmp(name, "__sanitizer_cov_", 16)) + return true; + + /* + * Compilers currently do not remove __tsan_func_entry/exit with the + * __no_sanitize_thread attribute, remove them. Memory barrier + * instrumentation is not emitted by the compiler, but inserted + * explicitly, so we need to also remove them. + */ + if (!strncmp(name, "__tsan_func_", 12) || + !strcmp(name, "__kcsan_mb") || + !strcmp(name, "__kcsan_wmb") || + !strcmp(name, "__kcsan_rmb") || + !strcmp(name, "__kcsan_release")) + return true; + + return false; +} + /* * Find the destination instructions for all calls. */ @@ -1031,13 +1056,8 @@ static int add_call_destinations(struct objtool_file *file) &file->static_call_list); } - /* - * Many compilers cannot disable KCOV with a function attribute - * so they need a little help, NOP out any KCOV calls from noinstr - * text. - */ if (insn->sec->noinstr && - !strncmp(insn->call_dest->name, "__sanitizer_cov_", 16)) { + should_remove_if_noinstr(insn->call_dest->name)) { if (reloc) { reloc->type = R_NONE; elf_write_reloc(file->elf, reloc);