From patchwork Tue Nov 30 11:44:09 2021
From: Marco Elver <elver@google.com>
Date: Tue, 30 Nov 2021 12:44:09 +0100
Subject: [PATCH v3 01/25] kcsan: Refactor reading of instrumented memory
Message-Id: <20211130114433.2580590-2-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
X-Patchwork-Id: 12646939
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org Factor out the switch statement reading instrumented memory into a helper read_instrumented_memory(). No functional change. Signed-off-by: Marco Elver Acked-by: Mark Rutland --- kernel/kcsan/core.c | 51 +++++++++++++++------------------------------ 1 file changed, 17 insertions(+), 34 deletions(-) diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c index 4b84c8e7884b..6bfd3040f46b 100644 --- a/kernel/kcsan/core.c +++ b/kernel/kcsan/core.c @@ -325,6 +325,21 @@ static void delay_access(int type) udelay(delay); } +/* + * Reads the instrumented memory for value change detection; value change + * detection is currently done for accesses up to a size of 8 bytes. + */ +static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size) +{ + switch (size) { + case 1: return READ_ONCE(*(const u8 *)ptr); + case 2: return READ_ONCE(*(const u16 *)ptr); + case 4: return READ_ONCE(*(const u32 *)ptr); + case 8: return READ_ONCE(*(const u64 *)ptr); + default: return 0; /* Ignore; we do not diff the values. */ + } +} + void kcsan_save_irqtrace(struct task_struct *task) { #ifdef CONFIG_TRACE_IRQFLAGS @@ -482,23 +497,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned * Read the current value, to later check and infer a race if the data * was modified via a non-instrumented access, e.g. from a device. */ - old = 0; - switch (size) { - case 1: - old = READ_ONCE(*(const u8 *)ptr); - break; - case 2: - old = READ_ONCE(*(const u16 *)ptr); - break; - case 4: - old = READ_ONCE(*(const u32 *)ptr); - break; - case 8: - old = READ_ONCE(*(const u64 *)ptr); - break; - default: - break; /* ignore; we do not diff the values */ - } + old = read_instrumented_memory(ptr, size); /* * Delay this thread, to increase probability of observing a racy @@ -511,23 +510,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned * racy access. 
From patchwork Tue Nov 30 11:44:10 2021
From: Marco Elver <elver@google.com>
Date: Tue, 30 Nov 2021 12:44:10 +0100
Subject: [PATCH v3 02/25] kcsan: Remove redundant zero-initialization of globals
Message-Id: <20211130114433.2580590-3-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
X-Patchwork-Id: 12646941
These globals are implicitly zero-initialized; remove the explicit
initialization. This keeps the upcoming additions to kcsan_ctx
consistent with the rest.

No functional change intended.

Signed-off-by: Marco Elver
Acked-by: Mark Rutland
---
v3:
* Minimize diff by leaving "scoped_accesses" on its own line, which
  should also reduce diff of future changes.
---
 init/init_task.c    | 5 -----
 kernel/kcsan/core.c | 5 -----
 2 files changed, 10 deletions(-)

diff --git a/init/init_task.c b/init/init_task.c
index 2d024066e27b..73cc8f03511a 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -182,11 +182,6 @@ struct task_struct init_task
 #endif
 #ifdef CONFIG_KCSAN
 	.kcsan_ctx = {
-		.disable_count		= 0,
-		.atomic_next		= 0,
-		.atomic_nest_count	= 0,
-		.in_flat_atomic		= false,
-		.access_mask		= 0,
 		.scoped_accesses	= {LIST_POISON1, NULL},
 	},
 #endif
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 6bfd3040f46b..e34a1710b7bc 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -44,11 +44,6 @@ bool kcsan_enabled;
 
 /* Per-CPU kcsan_ctx for interrupts */
 static DEFINE_PER_CPU(struct kcsan_ctx, kcsan_cpu_ctx) = {
-	.disable_count		= 0,
-	.atomic_next		= 0,
-	.atomic_nest_count	= 0,
-	.in_flat_atomic		= false,
-	.access_mask		= 0,
 	.scoped_accesses	= {LIST_POISON1, NULL},
 };
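The underlying C rule: objects with static storage duration are
zero-initialized, and members omitted from a designated initializer are zeroed
as well, so the removed "= 0" lines changed nothing. A small illustration
(hypothetical struct):

	struct ctx {
		int disable_count;
		int atomic_next;
		unsigned long access_mask;
	};

	/* All three definitions produce identical, all-zero contents: */
	static struct ctx a = { .disable_count = 0, .atomic_next = 0, .access_mask = 0 };
	static struct ctx b = { .disable_count = 0 };	/* rest implicitly zero */
	static struct ctx c;				/* everything implicitly zero */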
From patchwork Tue Nov 30 11:44:11 2021
From: Marco Elver <elver@google.com>
Date: Tue, 30 Nov 2021 12:44:11 +0100
Subject: [PATCH v3 03/25] kcsan: Avoid checking scoped accesses from nested contexts
Message-Id: <20211130114433.2580590-4-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
X-Patchwork-Id: 12646943

Avoid checking scoped accesses from nested contexts (such as nested
interrupts or in scheduler code) which share the same kcsan_ctx.

This is to avoid detecting false positive races of accesses in the same
thread with currently scoped accesses: consider setting up a watchpoint
for a non-scoped (normal) access that also "conflicts" with a current
scoped access. In a nested interrupt (or in the scheduler), which shares
the same kcsan_ctx, we cannot check scoped accesses set up in the parent
context -- simply ignore them in this case.

With the introduction of kcsan_ctx::disable_scoped, we can also clean up
kcsan_check_scoped_accesses()'s recursion guard, and do not need to
modify the list's prev pointer.

Signed-off-by: Marco Elver
---
 include/linux/kcsan.h |  1 +
 kernel/kcsan/core.c   | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/kcsan.h b/include/linux/kcsan.h
index fc266ecb2a4d..13cef3458fed 100644
--- a/include/linux/kcsan.h
+++ b/include/linux/kcsan.h
@@ -21,6 +21,7 @@
  */
 struct kcsan_ctx {
 	int disable_count; /* disable counter */
+	int disable_scoped; /* disable scoped access counter */
 	int atomic_next; /* number of following atomic ops */
 
 	/*
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index e34a1710b7bc..bd359f8ee63a 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -204,15 +204,17 @@ check_access(const volatile void *ptr, size_t size, int type, unsigned long ip);
 static noinline void kcsan_check_scoped_accesses(void)
 {
 	struct kcsan_ctx *ctx = get_ctx();
-	struct list_head *prev_save = ctx->scoped_accesses.prev;
 	struct kcsan_scoped_access *scoped_access;
 
-	ctx->scoped_accesses.prev = NULL;  /* Avoid recursion. */
+	if (ctx->disable_scoped)
+		return;
+
+	ctx->disable_scoped++;
 	list_for_each_entry(scoped_access, &ctx->scoped_accesses, list) {
 		check_access(scoped_access->ptr, scoped_access->size,
 			     scoped_access->type, scoped_access->ip);
 	}
-	ctx->scoped_accesses.prev = prev_save;
+	ctx->disable_scoped--;
 }
 
 /* Rules for generic atomic accesses. Called from fast-path. */
@@ -465,6 +467,15 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 		goto out;
 	}
 
+	/*
+	 * Avoid races of scoped accesses from nested interrupts (or scheduler).
+	 * Assume setting up a watchpoint for a non-scoped (normal) access that
+	 * also conflicts with a current scoped access. In a nested interrupt,
+	 * which shares the context, it would check a conflicting scoped access.
+	 * To avoid, disable scoped access checking.
+	 */
+	ctx->disable_scoped++;
+
 	/*
 	 * Save and restore the IRQ state trace touched by KCSAN, since KCSAN's
 	 * runtime is entered for every memory access, and potentially useful
@@ -578,6 +589,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	if (!kcsan_interrupt_watcher)
 		local_irq_restore(irq_flags);
 	kcsan_restore_irqtrace(current);
+	ctx->disable_scoped--;
 out:
 	user_access_restore(ua_flags);
 }
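The disable_scoped counter is a nesting-aware reentrancy guard: increments and
decrements balance across nested entries, and checks are skipped while the
counter is non-zero. The bare pattern (illustrative only, not the kernel code):

	static int disable_scoped;	/* in KCSAN: per-context, in kcsan_ctx */

	static void check_scoped_accesses(void)
	{
		if (disable_scoped)
			return;		/* re-entered from a nested context: skip */

		disable_scoped++;	/* guard the walk below against recursion */
		/* ... walk the scoped-access list; each check may re-enter ... */
		disable_scoped--;
	}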
From patchwork Tue Nov 30 11:44:12 2021
From: Marco Elver <elver@google.com>
Date: Tue, 30 Nov 2021 12:44:12 +0100
Subject: [PATCH v3 04/25] kcsan: Add core support for a subset of weak memory modeling
Message-Id: <20211130114433.2580590-5-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
X-Patchwork-Id: 12646947

Add support for modeling a subset of weak memory, which will enable
detection of a subset of data races due to missing memory barriers.

KCSAN's approach to detecting missing memory barriers is based on
modeling access reordering, and enabled if CONFIG_KCSAN_WEAK_MEMORY=y,
which depends on CONFIG_KCSAN_STRICT=y. The feature can be enabled or
disabled at boot and runtime via the kcsan.weak_memory boot parameter.

Each memory access for which a watchpoint is set up is also selected for
simulated reordering within the scope of its function (at most 1
in-flight access).

We are limited to modeling the effects of "buffering" (delaying the
access), since the runtime cannot "prefetch" accesses (therefore no
acquire modeling). Once an access has been selected for reordering, it
is checked along every other access until the end of the function scope.
If an appropriate memory barrier is encountered, the access will no
longer be considered for reordering.

When the result of a memory operation should be ordered by a barrier,
KCSAN can then detect data races where the conflict only occurs as a
result of a missing barrier due to reordering accesses.

Suggested-by: Dmitry Vyukov
Signed-off-by: Marco Elver
---
v3:
* Remove kcsan_noinstr hackery, since we now try to avoid adding any
  instrumentation to .noinstr.text in the first place.
* Restrict config WEAK_MEMORY to only be enabled with tooling where we
  actually remove instrumentation from noinstr.
* Don't define kcsan_weak_memory bool if !KCSAN_WEAK_MEMORY.

v2:
* Define kcsan_noinstr as noinline if we rely on objtool nop'ing out
  calls, to avoid things like LTO inlining it.
---
 include/linux/kcsan-checks.h |  10 +-
 include/linux/kcsan.h        |  10 +-
 include/linux/sched.h        |   3 +
 kernel/kcsan/core.c          | 209 ++++++++++++++++++++++++++++++++---
 lib/Kconfig.kcsan            |  20 ++++
 scripts/Makefile.kcsan       |   9 +-
 6 files changed, 242 insertions(+), 19 deletions(-)

diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h
index 5f5965246877..a1c6a89fde71 100644
--- a/include/linux/kcsan-checks.h
+++ b/include/linux/kcsan-checks.h
@@ -99,7 +99,15 @@ void kcsan_set_access_mask(unsigned long mask);
 
 /* Scoped access information. */
 struct kcsan_scoped_access {
-	struct list_head list;
+	union {
+		struct list_head list; /* scoped_accesses list */
+		/*
+		 * Not an entry in scoped_accesses list; stack depth from where
+		 * the access was initialized.
+		 */
+		int stack_depth;
+	};
+
 	/* Access information. */
 	const volatile void *ptr;
 	size_t size;
diff --git a/include/linux/kcsan.h b/include/linux/kcsan.h
index 13cef3458fed..c07c71f5ba4f 100644
--- a/include/linux/kcsan.h
+++ b/include/linux/kcsan.h
@@ -49,8 +49,16 @@ struct kcsan_ctx {
 	 */
 	unsigned long access_mask;
 
-	/* List of scoped accesses. */
+	/* List of scoped accesses; likely to be empty. */
 	struct list_head scoped_accesses;
+
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	/*
+	 * Scoped access for modeling access reordering to detect missing memory
+	 * barriers; only keep 1 to keep fast-path complexity manageable.
+	 */
+	struct kcsan_scoped_access reorder_access;
+#endif
 };
 
 /**
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 78c351e35fec..0cd40b010487 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,9 @@ struct task_struct {
 #ifdef CONFIG_TRACE_IRQFLAGS
 	struct irqtrace_events kcsan_save_irqtrace;
 #endif
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	int kcsan_stack_depth;
+#endif
 #endif
 
 #if IS_ENABLED(CONFIG_KUNIT)
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index bd359f8ee63a..36a75e79a0bd 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -20,6 +21,8 @@
 #include
 #include
+#include
+
 #include "encoding.h"
 #include "kcsan.h"
 #include "permissive.h"
@@ -40,6 +43,13 @@ module_param_named(udelay_interrupt, kcsan_udelay_interrupt, uint, 0644);
 module_param_named(skip_watch, kcsan_skip_watch, long, 0644);
 module_param_named(interrupt_watcher, kcsan_interrupt_watcher, bool, 0444);
 
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+static bool kcsan_weak_memory = true;
+module_param_named(weak_memory, kcsan_weak_memory, bool, 0644);
+#else
+#define kcsan_weak_memory false
+#endif
+
 bool kcsan_enabled;
 
 /* Per-CPU kcsan_ctx for interrupts */
@@ -351,6 +361,67 @@ void kcsan_restore_irqtrace(struct task_struct *task)
 #endif
 }
 
+static __always_inline int get_kcsan_stack_depth(void)
+{
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	return current->kcsan_stack_depth;
+#else
+	BUILD_BUG();
+	return 0;
+#endif
+}
+
+static __always_inline void add_kcsan_stack_depth(int val)
+{
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	current->kcsan_stack_depth += val;
+#else
+	BUILD_BUG();
+#endif
+}
+
+static __always_inline struct kcsan_scoped_access *get_reorder_access(struct kcsan_ctx *ctx)
+{
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	return ctx->disable_scoped ? NULL : &ctx->reorder_access;
+#else
+	return NULL;
+#endif
+}
+
+static __always_inline bool
+find_reorder_access(struct kcsan_ctx *ctx, const volatile void *ptr, size_t size,
+		    int type, unsigned long ip)
+{
+	struct kcsan_scoped_access *reorder_access = get_reorder_access(ctx);
+
+	if (!reorder_access)
+		return false;
+
+	/*
+	 * Note: If accesses are repeated while reorder_access is identical,
+	 * never matches the new access, because !(type & KCSAN_ACCESS_SCOPED).
+	 */
+	return reorder_access->ptr == ptr && reorder_access->size == size &&
+	       reorder_access->type == type && reorder_access->ip == ip;
+}
+
+static inline void
+set_reorder_access(struct kcsan_ctx *ctx, const volatile void *ptr, size_t size,
+		   int type, unsigned long ip)
+{
+	struct kcsan_scoped_access *reorder_access = get_reorder_access(ctx);
+
+	if (!reorder_access || !kcsan_weak_memory)
+		return;
+
+	reorder_access->ptr = ptr;
+	reorder_access->size = size;
+	reorder_access->type = type | KCSAN_ACCESS_SCOPED;
+	reorder_access->ip = ip;
+	reorder_access->stack_depth = get_kcsan_stack_depth();
+}
+
 /*
  * Pull everything together: check_access() below contains the performance
  * critical operations; the fast-path (including check_access) functions should
@@ -389,8 +460,10 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
 	 * The access_mask check relies on value-change comparison. To avoid
 	 * reporting a race where e.g. the writer set up the watchpoint, but the
	 * reader has access_mask!=0, we have to ignore the found watchpoint.
+	 *
+	 * reorder_access is never created from an access with access_mask set.
 	 */
-	if (ctx->access_mask)
+	if (ctx->access_mask && !find_reorder_access(ctx, ptr, size, type, ip))
 		return;
 
 	/*
@@ -440,11 +513,13 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	const bool is_assert = (type & KCSAN_ACCESS_ASSERT) != 0;
 	atomic_long_t *watchpoint;
 	u64 old, new, diff;
-	unsigned long access_mask;
 	enum kcsan_value_change value_change = KCSAN_VALUE_CHANGE_MAYBE;
+	bool interrupt_watcher = kcsan_interrupt_watcher;
 	unsigned long ua_flags = user_access_save();
 	struct kcsan_ctx *ctx = get_ctx();
+	unsigned long access_mask = ctx->access_mask;
 	unsigned long irq_flags = 0;
+	bool is_reorder_access;
 
 	/*
 	 * Always reset kcsan_skip counter in slow-path to avoid underflow; see
@@ -467,6 +542,17 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 		goto out;
 	}
 
+	/*
+	 * The local CPU cannot observe reordering of its own accesses, and
+	 * therefore we need to take care of 2 cases to avoid false positives:
+	 *
+	 *	1. Races of the reordered access with interrupts. To avoid, if
+	 *	   the current access is reorder_access, disable interrupts.
+	 *	2. Avoid races of scoped accesses from nested interrupts (below).
+	 */
+	is_reorder_access = find_reorder_access(ctx, ptr, size, type, ip);
+	if (is_reorder_access)
+		interrupt_watcher = false;
+
 	/*
 	 * Avoid races of scoped accesses from nested interrupts (or scheduler).
 	 * Assume setting up a watchpoint for a non-scoped (normal) access that
@@ -482,7 +568,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * information is lost if dirtied by KCSAN.
 	 */
 	kcsan_save_irqtrace(current);
-	if (!kcsan_interrupt_watcher)
+	if (!interrupt_watcher)
 		local_irq_save(irq_flags);
 
 	watchpoint = insert_watchpoint((unsigned long)ptr, size, is_write);
@@ -503,7 +589,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * Read the current value, to later check and infer a race if the data
 	 * was modified via a non-instrumented access, e.g. from a device.
 	 */
-	old = read_instrumented_memory(ptr, size);
+	old = is_reorder_access ? 0 : read_instrumented_memory(ptr, size);
 
 	/*
 	 * Delay this thread, to increase probability of observing a racy
@@ -515,8 +601,17 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * racy access.
 	 */
-	access_mask = ctx->access_mask;
-	new = read_instrumented_memory(ptr, size);
+	if (!is_reorder_access) {
+		new = read_instrumented_memory(ptr, size);
+	} else {
+		/*
+		 * Reordered accesses cannot be used for value change detection,
+		 * because the memory location may no longer be accessible and
+		 * could result in a fault.
+		 */
+		new = 0;
+		access_mask = 0;
+	}
 
 	diff = old ^ new;
 	if (access_mask)
@@ -585,11 +680,20 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 */
 	remove_watchpoint(watchpoint);
 	atomic_long_dec(&kcsan_counters[KCSAN_COUNTER_USED_WATCHPOINTS]);
+
 out_unlock:
-	if (!kcsan_interrupt_watcher)
+	if (!interrupt_watcher)
 		local_irq_restore(irq_flags);
 	kcsan_restore_irqtrace(current);
 	ctx->disable_scoped--;
+
+	/*
+	 * Reordered accesses cannot be used for value change detection,
+	 * therefore never consider for reordering if access_mask is set.
+	 * ASSERT_EXCLUSIVE are not real accesses, ignore them as well.
+	 */
+	if (!access_mask && !is_assert)
+		set_reorder_access(ctx, ptr, size, type, ip);
 out:
 	user_access_restore(ua_flags);
 }
@@ -597,7 +701,6 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 static __always_inline void
 check_access(const volatile void *ptr, size_t size, int type, unsigned long ip)
 {
-	const bool is_write = (type & KCSAN_ACCESS_WRITE) != 0;
 	atomic_long_t *watchpoint;
 	long encoded_watchpoint;
 
@@ -608,12 +711,14 @@ check_access(const volatile void *ptr, size_t size, int type, unsigned long ip)
 	if (unlikely(size == 0))
 		return;
 
+again:
 	/*
 	 * Avoid user_access_save in fast-path: find_watchpoint is safe without
 	 * user_access_save, as the address that ptr points to is only used to
 	 * check if a watchpoint exists; ptr is never dereferenced.
 	 */
-	watchpoint = find_watchpoint((unsigned long)ptr, size, !is_write,
+	watchpoint = find_watchpoint((unsigned long)ptr, size,
+				     !(type & KCSAN_ACCESS_WRITE),
 				     &encoded_watchpoint);
 	/*
 	 * It is safe to check kcsan_is_enabled() after find_watchpoint in the
@@ -627,9 +732,42 @@ check_access(const volatile void *ptr, size_t size, int type, unsigned long ip)
 	else {
 		struct kcsan_ctx *ctx = get_ctx(); /* Call only once in fast-path. */
 
-		if (unlikely(should_watch(ctx, ptr, size, type)))
+		if (unlikely(should_watch(ctx, ptr, size, type))) {
 			kcsan_setup_watchpoint(ptr, size, type, ip);
-		else if (unlikely(ctx->scoped_accesses.prev))
+			return;
+		}
+
+		if (!(type & KCSAN_ACCESS_SCOPED)) {
+			struct kcsan_scoped_access *reorder_access = get_reorder_access(ctx);
+
+			if (reorder_access) {
+				/*
+				 * reorder_access check: simulates reordering of
+				 * the access after subsequent operations.
+				 */
+				ptr = reorder_access->ptr;
+				type = reorder_access->type;
+				ip = reorder_access->ip;
+				/*
+				 * Upon a nested interrupt, this context's
+				 * reorder_access can be modified (shared ctx).
+				 * We know that upon return, reorder_access is
+				 * always invalidated by setting size to 0 via
+				 * __tsan_func_exit(). Therefore we must read
+				 * and check size after the other fields.
+				 */
+				barrier();
+				size = READ_ONCE(reorder_access->size);
+				if (size)
+					goto again;
+			}
+		}
+
+		/*
+		 * Always checked last, right before returning from runtime;
+		 * if reorder_access is valid, checked after it was checked.
+		 */
+		if (unlikely(ctx->scoped_accesses.prev))
 			kcsan_check_scoped_accesses();
 	}
 }
@@ -916,19 +1054,60 @@ DEFINE_TSAN_VOLATILE_READ_WRITE(8);
 DEFINE_TSAN_VOLATILE_READ_WRITE(16);
 
 /*
- * The below are not required by KCSAN, but can still be emitted by the
- * compiler.
+ * Function entry and exit are used to determine the validity of reorder_access.
+ * Reordering of the access ends at the end of the function scope where the
+ * access happened. This is done for two reasons:
+ *
+ *	1. Artificially limits the scope where missing barriers are detected.
+ *	   This minimizes false positives due to uninstrumented functions that
+ *	   contain the required barriers but were missed.
+ *
+ *	2. Simplifies generating the stack trace of the access.
  */
 void __tsan_func_entry(void *call_pc);
-void __tsan_func_entry(void *call_pc)
+noinline void __tsan_func_entry(void *call_pc)
 {
+	if (!IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY))
+		return;
+
+	instrumentation_begin();
+	add_kcsan_stack_depth(1);
+	instrumentation_end();
 }
 EXPORT_SYMBOL(__tsan_func_entry);
+
 void __tsan_func_exit(void);
-void __tsan_func_exit(void)
+noinline void __tsan_func_exit(void)
 {
+	struct kcsan_scoped_access *reorder_access;
+
+	if (!IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY))
+		return;
+
+	instrumentation_begin();
+	reorder_access = get_reorder_access(get_ctx());
+	if (!reorder_access)
+		goto out;
+
+	if (get_kcsan_stack_depth() <= reorder_access->stack_depth) {
+		/*
+		 * Access check to catch cases where write without a barrier
+		 * (supposed release) was last access in function: because
+		 * instrumentation is inserted before the real access, a data
+		 * race due to the write giving up a c-s would only be caught if
+		 * we do the conflicting access after.
+		 */
+		check_access(reorder_access->ptr, reorder_access->size,
+			     reorder_access->type, reorder_access->ip);
+		reorder_access->size = 0;
+		reorder_access->stack_depth = INT_MIN;
+	}
+out:
+	add_kcsan_stack_depth(-1);
+	instrumentation_end();
 }
 EXPORT_SYMBOL(__tsan_func_exit);
+
 void __tsan_init(void);
 void __tsan_init(void)
 {
diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index e0a93ffdef30..e4394ea8068b 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -191,6 +191,26 @@ config KCSAN_STRICT
 	  closely aligns with the rules defined by the Linux-kernel memory
 	  consistency model (LKMM).
 
+config KCSAN_WEAK_MEMORY
+	bool "Enable weak memory modeling to detect missing memory barriers"
+	default y
+	depends on KCSAN_STRICT
+	# We can either let objtool nop __tsan_func_{entry,exit}() and builtin
+	# atomics instrumentation in .noinstr.text, or use a compiler that can
+	# implement __no_kcsan to really remove all instrumentation.
+	depends on STACK_VALIDATION || CC_IS_GCC
+	help
+	  Enable support for modeling a subset of weak memory, which allows
+	  detecting a subset of data races due to missing memory barriers.
+
+	  Depends on KCSAN_STRICT, because the options strengthening certain
+	  plain accesses by default (depending on !KCSAN_STRICT) reduce the
+	  ability to detect any data races involving reordered accesses, in
+	  particular reordered writes.
+
+	  Weak memory modeling relies on additional instrumentation and may
+	  affect performance.
+
 config KCSAN_REPORT_VALUE_CHANGE_ONLY
 	bool "Only report races where watcher observed a data value change"
 	default y
diff --git a/scripts/Makefile.kcsan b/scripts/Makefile.kcsan
index 37cb504c77e1..4c7f0d282e42 100644
--- a/scripts/Makefile.kcsan
+++ b/scripts/Makefile.kcsan
@@ -9,7 +9,12 @@ endif
 
 # Keep most options here optional, to allow enabling more compilers if absence
 # of some options does not break KCSAN nor causes false positive reports.
-export CFLAGS_KCSAN := -fsanitize=thread \
-	$(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0) -fno-optimize-sibling-calls) \
+kcsan-cflags := -fsanitize=thread -fno-optimize-sibling-calls \
 	$(call cc-option,$(call cc-param,tsan-compound-read-before-write=1),$(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1))) \
 	$(call cc-param,tsan-distinguish-volatile=1)
+
+ifndef CONFIG_KCSAN_WEAK_MEMORY
+kcsan-cflags += $(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0))
+endif
+
+export CFLAGS_KCSAN := $(kcsan-cflags)
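The bug class this modeling targets is easiest to see in a message-passing
pattern. A hedged sketch (LKMM-style barriers; illustrative, not taken from the
patch): if smp_wmb() or smp_rmb() were missing, the data access may effectively
be delayed past the flag access, and it is exactly this simulated "buffering"
of an in-flight access that lets KCSAN report the resulting data race:

	int data;	/* payload, published before "ready" is set */
	int ready;	/* flag publishing the payload */

	void producer(void)	/* e.g. CPU 0 */
	{
		data = 42;
		smp_wmb();	/* order the data store before the flag store */
		WRITE_ONCE(ready, 1);
	}

	void consumer(void)	/* e.g. CPU 1 */
	{
		while (!READ_ONCE(ready))
			;
		smp_rmb();	/* order the flag load before the data load */
		/* ... safe to read data here ... */
	}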
From patchwork Tue Nov 30 11:44:13 2021
From: Marco Elver <elver@google.com>
Date: Tue, 30 Nov 2021 12:44:13 +0100
Subject: [PATCH v3 05/25] kcsan: Add core memory barrier instrumentation functions
Message-Id: <20211130114433.2580590-6-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
X-Patchwork-Id: 12646945

Add the core memory barrier instrumentation functions. These invalidate
the current in-flight reordered access based on the rules for the
respective barrier types and in-flight access type.

To obtain barrier instrumentation that can be disabled via __no_kcsan
with appropriate compiler support (and not just with objtool help),
barrier instrumentation repurposes __atomic_signal_fence(), instead of
inserting explicit calls. Crucially, __atomic_signal_fence() normally
does not map to any real instructions, but is still intercepted by
-fsanitize=thread. As a result, like any other instrumentation done by
the compiler, barrier instrumentation can be disabled with __no_kcsan.

Unfortunately Clang and GCC currently differ in their __no_kcsan aka
__no_sanitize_thread behaviour with respect to builtin atomics (and
__tsan_func_{entry,exit}) instrumentation. This is already reflected in
Kconfig.kcsan's dependencies for KCSAN_WEAK_MEMORY. A later change will
introduce support for newer versions of Clang that can implement
__no_kcsan to also remove the additional instrumentation introduced by
KCSAN_WEAK_MEMORY.

Signed-off-by: Marco Elver
---
v3:
* Rework to avoid inserting explicit calls, and instead repurpose
  __atomic_signal_fence (see comment at __tsan_atomic_signal_fence),
  which is known to the ThreadSanitizer instrumentation and can
  therefore be removed via function attributes.

v2:
* Rename kcsan_atomic_release() to kcsan_atomic_builtin_memorder() to
  avoid confusion.
---
 include/linux/kcsan-checks.h | 71 +++++++++++++++++++++++++++++++++++-
 kernel/kcsan/core.c          | 68 +++++++++++++++++++++++++++++++++-
 2 files changed, 136 insertions(+), 3 deletions(-)

diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h
index a1c6a89fde71..9d2c869167f2 100644
--- a/include/linux/kcsan-checks.h
+++ b/include/linux/kcsan-checks.h
@@ -36,6 +36,36 @@
  */
 void __kcsan_check_access(const volatile void *ptr, size_t size, int type);
 
+/*
+ * See definition of __tsan_atomic_signal_fence() in kernel/kcsan/core.c.
+ * Note: The mappings are arbitrary, and do not reflect any real mappings of C11
+ * memory orders to the LKMM memory orders and vice-versa!
+ */
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_mb	__ATOMIC_SEQ_CST
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_wmb	__ATOMIC_ACQ_REL
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_rmb	__ATOMIC_ACQUIRE
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_release	__ATOMIC_RELEASE
+
+/**
+ * __kcsan_mb - full memory barrier instrumentation
+ */
+void __kcsan_mb(void);
+
+/**
+ * __kcsan_wmb - write memory barrier instrumentation
+ */
+void __kcsan_wmb(void);
+
+/**
+ * __kcsan_rmb - read memory barrier instrumentation
+ */
+void __kcsan_rmb(void);
+
+/**
+ * __kcsan_release - release barrier instrumentation
+ */
+void __kcsan_release(void);
+
 /**
  * kcsan_disable_current - disable KCSAN for the current context
  *
@@ -159,6 +189,10 @@ void kcsan_end_scoped_access(struct kcsan_scoped_access *sa);
 static inline void __kcsan_check_access(const volatile void *ptr, size_t size,
 					int type) { }
 
+static inline void __kcsan_mb(void)			{ }
+static inline void __kcsan_wmb(void)			{ }
+static inline void __kcsan_rmb(void)			{ }
+static inline void __kcsan_release(void)		{ }
 static inline void kcsan_disable_current(void)		{ }
 static inline void kcsan_enable_current(void)		{ }
 static inline void kcsan_enable_current_nowarn(void)	{ }
@@ -191,12 +225,45 @@ static inline void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) { }
  */
 #define __kcsan_disable_current kcsan_disable_current
 #define __kcsan_enable_current kcsan_enable_current_nowarn
-#else
+#else /* __SANITIZE_THREAD__ */
 static inline void kcsan_check_access(const volatile void *ptr, size_t size,
 				      int type) { }
 static inline void __kcsan_enable_current(void)  { }
 static inline void __kcsan_disable_current(void) { }
-#endif
+#endif /* __SANITIZE_THREAD__ */
+
+#if defined(CONFIG_KCSAN_WEAK_MEMORY) && defined(__SANITIZE_THREAD__)
+/*
+ * Normal barrier instrumentation is not done via explicit calls, but by mapping
+ * to a repurposed __atomic_signal_fence(), which normally does not generate any
+ * real instructions, but is still intercepted by fsanitize=thread. This means,
+ * like any other compile-time instrumentation, barrier instrumentation can be
+ * disabled with the __no_kcsan function attribute.
+ *
+ * Also see definition of __tsan_atomic_signal_fence() in kernel/kcsan/core.c.
+ */
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE(name)					\
+	static __always_inline void kcsan_##name(void)				\
+	{									\
+		barrier();							\
+		__atomic_signal_fence(__KCSAN_BARRIER_TO_SIGNAL_FENCE_##name);	\
+		barrier();							\
+	}
+__KCSAN_BARRIER_TO_SIGNAL_FENCE(mb)
+__KCSAN_BARRIER_TO_SIGNAL_FENCE(wmb)
+__KCSAN_BARRIER_TO_SIGNAL_FENCE(rmb)
+__KCSAN_BARRIER_TO_SIGNAL_FENCE(release)
+#elif defined(CONFIG_KCSAN_WEAK_MEMORY) && defined(__KCSAN_INSTRUMENT_BARRIERS__)
+#define kcsan_mb	__kcsan_mb
+#define kcsan_wmb	__kcsan_wmb
+#define kcsan_rmb	__kcsan_rmb
+#define kcsan_release	__kcsan_release
+#else /* CONFIG_KCSAN_WEAK_MEMORY && ... */
+static inline void kcsan_mb(void)	{ }
+static inline void kcsan_wmb(void)	{ }
+static inline void kcsan_rmb(void)	{ }
+static inline void kcsan_release(void)	{ }
+#endif /* CONFIG_KCSAN_WEAK_MEMORY && ... */
 
 /**
  * __kcsan_check_read - check regular read access for races
 
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 36a75e79a0bd..2254cb75cbb0 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -942,6 +942,22 @@ void __kcsan_check_access(const volatile void *ptr, size_t size, int type)
 }
 EXPORT_SYMBOL(__kcsan_check_access);
 
+#define DEFINE_MEMORY_BARRIER(name, order_before_cond)				\
+	void __kcsan_##name(void)						\
+	{									\
+		struct kcsan_scoped_access *sa = get_reorder_access(get_ctx());	\
+		if (!sa)							\
+			return;							\
+		if (order_before_cond)						\
+			sa->size = 0;						\
+	}									\
+	EXPORT_SYMBOL(__kcsan_##name)
+
+DEFINE_MEMORY_BARRIER(mb, true);
+DEFINE_MEMORY_BARRIER(wmb, sa->type & (KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND));
+DEFINE_MEMORY_BARRIER(rmb, !(sa->type & KCSAN_ACCESS_WRITE) || (sa->type & KCSAN_ACCESS_COMPOUND));
+DEFINE_MEMORY_BARRIER(release, true);
+
 /*
  * KCSAN uses the same instrumentation that is emitted by supported compilers
  * for ThreadSanitizer (TSAN).
@@ -1130,10 +1146,19 @@ EXPORT_SYMBOL(__tsan_init);
  * functions, whose job is to also execute the operation itself.
  */
 
+static __always_inline void kcsan_atomic_builtin_memorder(int memorder)
+{
+	if (memorder == __ATOMIC_RELEASE ||
+	    memorder == __ATOMIC_SEQ_CST ||
+	    memorder == __ATOMIC_ACQ_REL)
+		__kcsan_release();
+}
+
 #define DEFINE_TSAN_ATOMIC_LOAD_STORE(bits)                                        \
 	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder);      \
 	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder)       \
 	{                                                                          \
+		kcsan_atomic_builtin_memorder(memorder);                           \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                    \
 			check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_ATOMIC, _RET_IP_); \
 		}                                                                  \
@@ -1143,6 +1168,7 @@ EXPORT_SYMBOL(__tsan_init);
 	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder);   \
 	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder)    \
 	{                                                                          \
+		kcsan_atomic_builtin_memorder(memorder);                           \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                    \
 			check_access(ptr, bits / BITS_PER_BYTE,                    \
 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC, _RET_IP_); \
@@ -1155,6 +1181,7 @@ EXPORT_SYMBOL(__tsan_init);
 	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder); \
 	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder)  \
 	{                                                                          \
+		kcsan_atomic_builtin_memorder(memorder);                           \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                    \
 			check_access(ptr, bits / BITS_PER_BYTE,                    \
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |  \
@@ -1187,6 +1214,7 @@ EXPORT_SYMBOL(__tsan_init);
 	int __tsan_atomic##bits##_compare_exchange_##strength(u##bits *ptr, u##bits *exp, \
 							      u##bits val, int mo, int fail_mo) \
 	{                                                                          \
+		kcsan_atomic_builtin_memorder(mo);                                 \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                    \
 			check_access(ptr, bits / BITS_PER_BYTE,                    \
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |  \
@@ -1202,6 +1230,7 @@ EXPORT_SYMBOL(__tsan_init);
 	u##bits __tsan_atomic##bits##_compare_exchange_val(u##bits *ptr, u##bits exp, u##bits val, \
 							   int mo, int fail_mo)    \
 	{                                                                          \
+		kcsan_atomic_builtin_memorder(mo);                                 \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                    \
 			check_access(ptr, bits / BITS_PER_BYTE,                    \
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |  \
@@ -1233,10 +1262,47 @@ DEFINE_TSAN_ATOMIC_OPS(64);
 void __tsan_atomic_thread_fence(int memorder);
 void __tsan_atomic_thread_fence(int memorder)
 {
+	kcsan_atomic_builtin_memorder(memorder);
 	__atomic_thread_fence(memorder);
 }
 EXPORT_SYMBOL(__tsan_atomic_thread_fence);
 
+/*
+ * In instrumented files, we emit instrumentation for barriers by mapping the
+ * kernel barriers to an __atomic_signal_fence(), which is interpreted specially
+ * and otherwise has no relation to a real __atomic_signal_fence(). No known
+ * kernel code uses __atomic_signal_fence().
+ *
+ * Since fsanitize=thread instrumentation handles __atomic_signal_fence(), which
+ * are turned into calls to __tsan_atomic_signal_fence(), such instrumentation
+ * can be disabled via the __no_kcsan function attribute (vs. an explicit call
+ * which could not). When __no_kcsan is requested, __atomic_signal_fence()
+ * generates no code.
+ *
+ * Note: The result of using __atomic_signal_fence() with KCSAN enabled is
+ * potentially limiting the compiler's ability to reorder operations; however,
+ * if barriers were instrumented with explicit calls (without LTO), the compiler
+ * couldn't optimize much anyway. The result of a hypothetical architecture
+ * using __atomic_signal_fence() in normal code would be KCSAN false negatives.
+ */
 void __tsan_atomic_signal_fence(int memorder);
-void __tsan_atomic_signal_fence(int memorder) { }
+noinline void __tsan_atomic_signal_fence(int memorder)
+{
+	switch (memorder) {
+	case __KCSAN_BARRIER_TO_SIGNAL_FENCE_mb:
+		__kcsan_mb();
+		break;
+	case __KCSAN_BARRIER_TO_SIGNAL_FENCE_wmb:
+		__kcsan_wmb();
+		break;
+	case __KCSAN_BARRIER_TO_SIGNAL_FENCE_rmb:
+		__kcsan_rmb();
+		break;
+	case __KCSAN_BARRIER_TO_SIGNAL_FENCE_release:
+		__kcsan_release();
+		break;
+	default:
+		break;
+	}
+}
 EXPORT_SYMBOL(__tsan_atomic_signal_fence);
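The repurposing trick in isolation, as a hedged user-space sketch (names
hypothetical): a named barrier expands to __atomic_signal_fence() with a
distinguishing memory order; the fence normally emits no instructions, but
-fsanitize=thread rewrites it into a call to __tsan_atomic_signal_fence(order)
that a runtime can decode, and with the no-sanitize function attribute
(__no_kcsan in the kernel) it compiles away entirely:

	/* Arbitrary name->order mapping; the orders only distinguish barriers. */
	#define BARRIER_ORDER_mb	__ATOMIC_SEQ_CST
	#define BARRIER_ORDER_rmb	__ATOMIC_ACQUIRE

	/* Emits no instructions by itself; under -fsanitize=thread it becomes
	 * a call to __tsan_atomic_signal_fence(order) that can be intercepted.
	 */
	#define DEFINE_BARRIER(name)						\
		static inline void demo_##name(void)				\
		{								\
			__atomic_signal_fence(BARRIER_ORDER_##name);		\
		}

	DEFINE_BARRIER(mb)
	DEFINE_BARRIER(rmb)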
From patchwork Tue Nov 30 11:44:14 2021
From: Marco Elver <elver@google.com>
Date: Tue, 30 Nov 2021 12:44:14 +0100
Subject: [PATCH v3 06/25] kcsan, kbuild: Add option for barrier instrumentation only
Message-Id: <20211130114433.2580590-7-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
X-Patchwork-Id: 12646949

Source files that disable KCSAN via KCSAN_SANITIZE := n have all
instrumentation removed, including the explicit barrier
instrumentation. With the new instrumentation for memory barriers, a
few places need to enable just the explicit barrier instrumentation to
avoid false positives.

Setting the Makefile variable KCSAN_INSTRUMENT_BARRIERS_obj.o, or
KCSAN_INSTRUMENT_BARRIERS (for all files), to 'y' enables only the
explicit barrier instrumentation.

Signed-off-by: Marco Elver
---
 scripts/Makefile.lib | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index d1f865b8c0cb..ab17f7b2e33c 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -182,6 +182,11 @@ ifeq ($(CONFIG_KCSAN),y)
 _c_flags += $(if $(patsubst n%,, \
 	$(KCSAN_SANITIZE_$(basetarget).o)$(KCSAN_SANITIZE)y), \
 	$(CFLAGS_KCSAN))
+# Some uninstrumented files provide implied barriers required to avoid false
+# positives: set KCSAN_INSTRUMENT_BARRIERS for barrier instrumentation only.
+_c_flags += $(if $(patsubst n%,, \
+	$(KCSAN_INSTRUMENT_BARRIERS_$(basetarget).o)$(KCSAN_INSTRUMENT_BARRIERS)n), \
+	-D__KCSAN_INSTRUMENT_BARRIERS__)
 endif
 
 # $(srctree)/$(src) for including checkin headers from generated source files
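Intended usage in a subsystem Makefile (file names illustrative, not from the
patch):

	# Drop all KCSAN instrumentation from this object (e.g. low-level
	# entry code), but keep the explicit barrier instrumentation, so that
	# its barriers still invalidate callers' in-flight reordered accesses:
	KCSAN_SANITIZE_entry_code.o		:= n
	KCSAN_INSTRUMENT_BARRIERS_entry_code.o	:= y

	# Or for all objects built by this Makefile:
	KCSAN_SANITIZE			:= n
	KCSAN_INSTRUMENT_BARRIERS	:= y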
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org The scoping of an access simply denotes the scope in which it may be reordered. However, in reports, it'll be less confusing to say the access is "reordered". This is more accurate when the race occurred. Signed-off-by: Marco Elver --- kernel/kcsan/kcsan_test.c | 4 ++-- kernel/kcsan/report.c | 16 ++++++++-------- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c index 660729238588..6e3c2b8bc608 100644 --- a/kernel/kcsan/kcsan_test.c +++ b/kernel/kcsan/kcsan_test.c @@ -213,9 +213,9 @@ static bool report_matches(const struct expect_report *r) const bool is_atomic = (ty & KCSAN_ACCESS_ATOMIC); const bool is_scoped = (ty & KCSAN_ACCESS_SCOPED); const char *const access_type_aux = - (is_atomic && is_scoped) ? " (marked, scoped)" + (is_atomic && is_scoped) ? " (marked, reordered)" : (is_atomic ? " (marked)" - : (is_scoped ? " (scoped)" : "")); + : (is_scoped ? " (reordered)" : "")); if (i == 1) { /* Access 2 */ diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c index fc15077991c4..1b0e050bdf6a 100644 --- a/kernel/kcsan/report.c +++ b/kernel/kcsan/report.c @@ -215,9 +215,9 @@ static const char *get_access_type(int type) if (type & KCSAN_ACCESS_ASSERT) { if (type & KCSAN_ACCESS_SCOPED) { if (type & KCSAN_ACCESS_WRITE) - return "assert no accesses (scoped)"; + return "assert no accesses (reordered)"; else - return "assert no writes (scoped)"; + return "assert no writes (reordered)"; } else { if (type & KCSAN_ACCESS_WRITE) return "assert no accesses"; @@ -240,17 +240,17 @@ static const char *get_access_type(int type) case KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: return "read-write (marked)"; case KCSAN_ACCESS_SCOPED: - return "read (scoped)"; + return "read (reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_ATOMIC: - return "read (marked, scoped)"; + return "read (marked, reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE: - return "write (scoped)"; + return "write (reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: - return "write (marked, scoped)"; + return "write (marked, reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE: - return "read-write (scoped)"; + return "read-write (reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: - return "read-write (marked, scoped)"; + return "read-write (marked, reordered)"; default: BUG(); } From patchwork Tue Nov 30 11:44:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12646951 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CE80AC4332F for ; Tue, 30 Nov 2021 11:45:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241042AbhK3Lsr (ORCPT ); Tue, 30 Nov 2021 06:48:47 -0500 Received: 
From patchwork Tue Nov 30 11:44:16 2021
Subject: [PATCH v3 08/25] kcsan: Show location access was reordered to
From: Marco Elver <elver@google.com>
Date: Tue, 30 Nov 2021 12:44:16 +0100
Message-Id: <20211130114433.2580590-9-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

Also show the location the access was reordered to.
An example report:

 | ==================================================================
 | BUG: KCSAN: data-race in test_kernel_wrong_memorder / test_kernel_wrong_memorder
 |
 | read-write to 0xffffffffc01e61a8 of 8 bytes by task 2311 on cpu 5:
 |  test_kernel_wrong_memorder+0x57/0x90
 |  access_thread+0x99/0xe0
 |  kthread+0x2ba/0x2f0
 |  ret_from_fork+0x22/0x30
 |
 | read-write (reordered) to 0xffffffffc01e61a8 of 8 bytes by task 2310 on cpu 7:
 |  test_kernel_wrong_memorder+0x57/0x90
 |  access_thread+0x99/0xe0
 |  kthread+0x2ba/0x2f0
 |  ret_from_fork+0x22/0x30
 |  |
 |  +-> reordered to: test_kernel_wrong_memorder+0x80/0x90
 |
 | Reported by Kernel Concurrency Sanitizer on:
 | CPU: 7 PID: 2310 Comm: access_thread Not tainted 5.14.0-rc1+ #18
 | Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014
 | ==================================================================

Signed-off-by: Marco Elver <elver@google.com>
Reviewed-by: Boqun Feng
---
 kernel/kcsan/report.c | 35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c
index 1b0e050bdf6a..67794404042a 100644
--- a/kernel/kcsan/report.c
+++ b/kernel/kcsan/report.c
@@ -308,10 +308,12 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries

 /*
  * Skips to the first entry that matches the function of @ip, and then replaces
- * that entry with @ip, returning the entries to skip.
+ * that entry with @ip, returning the entries to skip with @replaced containing
+ * the replaced entry.
  */
 static int
-replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned long ip)
+replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned long ip,
+		    unsigned long *replaced)
 {
 	unsigned long symbolsize, offset;
 	unsigned long target_func;
@@ -330,6 +332,7 @@ replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned lon
 		func -= offset;

 		if (func == target_func) {
+			*replaced = stack_entries[skip];
 			stack_entries[skip] = ip;
 			return skip;
 		}
@@ -342,9 +345,10 @@ replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned lon
 }

 static int
-sanitize_stack_entries(unsigned long stack_entries[], int num_entries, unsigned long ip)
+sanitize_stack_entries(unsigned long stack_entries[], int num_entries, unsigned long ip,
+		       unsigned long *replaced)
 {
-	return ip ? replace_stack_entry(stack_entries, num_entries, ip) :
+	return ip ? replace_stack_entry(stack_entries, num_entries, ip, replaced) :
 		    get_stack_skipnr(stack_entries, num_entries);
 }

@@ -360,6 +364,14 @@ static int sym_strcmp(void *addr1, void *addr2)
 	return strncmp(buf1, buf2, sizeof(buf1));
 }

+static void
+print_stack_trace(unsigned long stack_entries[], int num_entries, unsigned long reordered_to)
+{
+	stack_trace_print(stack_entries, num_entries, 0);
+	if (reordered_to)
+		pr_err(" |\n +-> reordered to: %pS\n", (void *)reordered_to);
+}
+
 static void print_verbose_info(struct task_struct *task)
 {
 	if (!task)
@@ -378,10 +390,12 @@ static void print_report(enum kcsan_value_change value_change,
 			 struct other_info *other_info,
 			 u64 old, u64 new, u64 mask)
 {
+	unsigned long reordered_to = 0;
 	unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 };
 	int num_stack_entries = stack_trace_save(stack_entries, NUM_STACK_ENTRIES, 1);
-	int skipnr = sanitize_stack_entries(stack_entries, num_stack_entries, ai->ip);
+	int skipnr = sanitize_stack_entries(stack_entries, num_stack_entries, ai->ip, &reordered_to);
 	unsigned long this_frame = stack_entries[skipnr];
+	unsigned long other_reordered_to = 0;
 	unsigned long other_frame = 0;
 	int other_skipnr = 0; /* silence uninit warnings */

@@ -394,7 +408,7 @@ static void print_report(enum kcsan_value_change value_change,
 	if (other_info) {
 		other_skipnr = sanitize_stack_entries(other_info->stack_entries,
 						      other_info->num_stack_entries,
-						      other_info->ai.ip);
+						      other_info->ai.ip, &other_reordered_to);
 		other_frame = other_info->stack_entries[other_skipnr];

 		/* @value_change is only known for the other thread */
@@ -434,10 +448,9 @@ static void print_report(enum kcsan_value_change value_change,
 			other_info->ai.cpu_id);

 		/* Print the other thread's stack trace. */
-		stack_trace_print(other_info->stack_entries + other_skipnr,
+		print_stack_trace(other_info->stack_entries + other_skipnr,
 				  other_info->num_stack_entries - other_skipnr,
-				  0);
-
+				  other_reordered_to);
 		if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
 			print_verbose_info(other_info->task);

@@ -451,9 +464,7 @@ static void print_report(enum kcsan_value_change value_change,
 			get_thread_desc(ai->task_pid), ai->cpu_id);
 	}

 	/* Print stack trace of this thread. */
-	stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr,
-			  0);
-
+	print_stack_trace(stack_entries + skipnr, num_stack_entries - skipnr, reordered_to);
 	if (IS_ENABLED(CONFIG_KCSAN_VERBOSE))
 		print_verbose_info(current);

From patchwork Tue Nov 30 11:44:17 2021
Subject: [PATCH v3 09/25] kcsan: Document modeling of weak memory
From: Marco Elver <elver@google.com>
Date: Tue, 30 Nov 2021 12:44:17 +0100
Message-Id: <20211130114433.2580590-10-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org Document how KCSAN models a subset of weak memory and the subset of missing memory barriers it can detect as a result. Signed-off-by: Marco Elver --- v2: * Note the reason that address or control dependencies do not require special handling. --- Documentation/dev-tools/kcsan.rst | 76 +++++++++++++++++++++++++------ 1 file changed, 63 insertions(+), 13 deletions(-) diff --git a/Documentation/dev-tools/kcsan.rst b/Documentation/dev-tools/kcsan.rst index 7db43c7c09b8..3ae866dcc924 100644 --- a/Documentation/dev-tools/kcsan.rst +++ b/Documentation/dev-tools/kcsan.rst @@ -204,17 +204,17 @@ Ultimately this allows to determine the possible executions of concurrent code, and if that code is free from data races. KCSAN is aware of *marked atomic operations* (``READ_ONCE``, ``WRITE_ONCE``, -``atomic_*``, etc.), but is oblivious of any ordering guarantees and simply -assumes that memory barriers are placed correctly. In other words, KCSAN -assumes that as long as a plain access is not observed to race with another -conflicting access, memory operations are correctly ordered. - -This means that KCSAN will not report *potential* data races due to missing -memory ordering. Developers should therefore carefully consider the required -memory ordering requirements that remain unchecked. If, however, missing -memory ordering (that is observable with a particular compiler and -architecture) leads to an observable data race (e.g. entering a critical -section erroneously), KCSAN would report the resulting data race. +``atomic_*``, etc.), and a subset of ordering guarantees implied by memory +barriers. With ``CONFIG_KCSAN_WEAK_MEMORY=y``, KCSAN models load or store +buffering, and can detect missing ``smp_mb()``, ``smp_wmb()``, ``smp_rmb()``, +``smp_store_release()``, and all ``atomic_*`` operations with equivalent +implied barriers. + +Note, KCSAN will not report all data races due to missing memory ordering, +specifically where a memory barrier would be required to prohibit subsequent +memory operation from reordering before the barrier. Developers should +therefore carefully consider the required memory ordering requirements that +remain unchecked. Race Detection Beyond Data Races -------------------------------- @@ -268,6 +268,56 @@ marked operations, if all accesses to a variable that is accessed concurrently are properly marked, KCSAN will never trigger a watchpoint and therefore never report the accesses. +Modeling Weak Memory +~~~~~~~~~~~~~~~~~~~~ + +KCSAN's approach to detecting data races due to missing memory barriers is +based on modeling access reordering (with ``CONFIG_KCSAN_WEAK_MEMORY=y``). +Each plain memory access for which a watchpoint is set up, is also selected for +simulated reordering within the scope of its function (at most 1 in-flight +access). + +Once an access has been selected for reordering, it is checked along every +other access until the end of the function scope. If an appropriate memory +barrier is encountered, the access will no longer be considered for simulated +reordering. 
+
+When the result of a memory operation should be ordered by a barrier, KCSAN can
+then detect data races where the conflict only occurs as a result of a missing
+barrier. Consider the example::
+
+    int x, flag;
+    void T1(void)
+    {
+        x = 1;                  // data race!
+        WRITE_ONCE(flag, 1);    // correct: smp_store_release(&flag, 1)
+    }
+    void T2(void)
+    {
+        while (!READ_ONCE(flag));   // correct: smp_load_acquire(&flag)
+        ... = x;                // data race!
+    }
+
+When weak memory modeling is enabled, KCSAN can consider ``x`` in ``T1`` for
+simulated reordering. After the write of ``flag``, ``x`` is again checked for
+concurrent accesses: because ``T2`` is able to proceed after the write of
+``flag``, a data race is detected. With the correct barriers in place, ``x``
+would not be considered for reordering after the proper release of ``flag``,
+and no data race would be detected.
+
+Deliberate trade-offs in complexity, as well as practical limitations, mean
+only a subset of data races due to missing memory barriers can be detected.
+With currently available compiler support, the implementation is limited to
+modeling the effects of "buffering" (delaying accesses), since the runtime
+cannot "prefetch" accesses. Also recall that watchpoints are only set up for
+plain accesses, which are the only access type for which KCSAN simulates
+reordering. This means reordering of marked accesses is not modeled.
+
+A consequence of the above is that acquire operations do not require barrier
+instrumentation (no prefetching). Furthermore, marked accesses introducing
+address or control dependencies do not require special handling (the marked
+access cannot be reordered, and later dependent accesses cannot be prefetched).
+
 Key Properties
 ~~~~~~~~~~~~~~

@@ -290,8 +340,8 @@ Key Properties
 4. **Detects Racy Writes from Devices:** Due to checking data values upon
    setting up watchpoints, racy writes from devices can also be detected.

-5. **Memory Ordering:** KCSAN is *not* explicitly aware of the LKMM's ordering
-   rules; this may result in missed data races (false negatives).
+5. **Memory Ordering:** KCSAN is aware of only a subset of LKMM ordering rules;
+   this may result in missed data races (false negatives).
 6. **Analysis Accuracy:** For observed executions, due to using a sampling
    strategy, the analysis is *unsound* (false negatives possible), but aims to
    be complete.
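To make the documented example concrete, this is a sketch (illustrative only,
not part of the patch) of the same ``T1``/``T2`` code with the release/acquire
pair that the comments in the example suggest, which KCSAN would then no
longer flag:

    int x, flag;
    void T1(void)
    {
        x = 1;                          /* ordered before the release */
        smp_store_release(&flag, 1);    /* publishes x */
    }
    void T2(void)
    {
        while (!smp_load_acquire(&flag));   /* observes the release of flag */
        ... = x;                            /* safe: x is visible here */
    }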
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org Due to reordering accesses with weak memory modeling, any access can now appear as "(reordered)". Match any permutation of accesses if CONFIG_KCSAN_WEAK_MEMORY=y, so that we effectively match an access if it is denoted "(reordered)" or not. Signed-off-by: Marco Elver --- kernel/kcsan/kcsan_test.c | 92 +++++++++++++++++++++++++++------------ 1 file changed, 63 insertions(+), 29 deletions(-) diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c index 6e3c2b8bc608..ec054879201b 100644 --- a/kernel/kcsan/kcsan_test.c +++ b/kernel/kcsan/kcsan_test.c @@ -151,7 +151,7 @@ struct expect_report { /* Check observed report matches information in @r. */ __no_kcsan -static bool report_matches(const struct expect_report *r) +static bool __report_matches(const struct expect_report *r) { const bool is_assert = (r->access[0].type | r->access[1].type) & KCSAN_ACCESS_ASSERT; bool ret = false; @@ -253,6 +253,40 @@ static bool report_matches(const struct expect_report *r) return ret; } +static __always_inline const struct expect_report * +__report_set_scoped(struct expect_report *r, int accesses) +{ + BUILD_BUG_ON(accesses > 3); + + if (accesses & 1) + r->access[0].type |= KCSAN_ACCESS_SCOPED; + else + r->access[0].type &= ~KCSAN_ACCESS_SCOPED; + + if (accesses & 2) + r->access[1].type |= KCSAN_ACCESS_SCOPED; + else + r->access[1].type &= ~KCSAN_ACCESS_SCOPED; + + return r; +} + +__no_kcsan +static bool report_matches_any_reordered(struct expect_report *r) +{ + return __report_matches(__report_set_scoped(r, 0)) || + __report_matches(__report_set_scoped(r, 1)) || + __report_matches(__report_set_scoped(r, 2)) || + __report_matches(__report_set_scoped(r, 3)); +} + +#ifdef CONFIG_KCSAN_WEAK_MEMORY +/* Due to reordering accesses, any access may appear as "(reordered)". */ +#define report_matches report_matches_any_reordered +#else +#define report_matches __report_matches +#endif + /* ===== Test kernels ===== */ static long test_sink; @@ -438,13 +472,13 @@ static noinline void test_kernel_xor_1bit(void) __no_kcsan static void test_basic(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - static const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, @@ -469,14 +503,14 @@ static void test_basic(struct kunit *test) __no_kcsan static void test_concurrent_races(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { /* NULL will match any address. 
*/ { test_kernel_rmw_array, NULL, 0, __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, { test_kernel_rmw_array, NULL, 0, __KCSAN_ACCESS_RW(0) }, }, }; - static const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_rmw_array, NULL, 0, 0 }, { test_kernel_rmw_array, NULL, 0, 0 }, @@ -498,13 +532,13 @@ static void test_concurrent_races(struct kunit *test) __no_kcsan static void test_novalue_change(struct kunit *test) { - const struct expect_report expect_rw = { + struct expect_report expect_rw = { .access = { { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - const struct expect_report expect_ww = { + struct expect_report expect_ww = { .access = { { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -530,13 +564,13 @@ static void test_novalue_change(struct kunit *test) __no_kcsan static void test_novalue_change_exception(struct kunit *test) { - const struct expect_report expect_rw = { + struct expect_report expect_rw = { .access = { { test_kernel_write_nochange_rcu, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - const struct expect_report expect_ww = { + struct expect_report expect_ww = { .access = { { test_kernel_write_nochange_rcu, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_write_nochange_rcu, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -556,7 +590,7 @@ static void test_novalue_change_exception(struct kunit *test) __no_kcsan static void test_unknown_origin(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { NULL }, @@ -578,7 +612,7 @@ static void test_unknown_origin(struct kunit *test) __no_kcsan static void test_write_write_assume_atomic(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_write, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -604,7 +638,7 @@ static void test_write_write_assume_atomic(struct kunit *test) __no_kcsan static void test_write_write_struct(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, @@ -626,7 +660,7 @@ static void test_write_write_struct(struct kunit *test) __no_kcsan static void test_write_write_struct_part(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_write_struct_part, &test_struct.val[3], sizeof(test_struct.val[3]), KCSAN_ACCESS_WRITE }, @@ -658,7 +692,7 @@ static void test_read_atomic_write_atomic(struct kunit *test) __no_kcsan static void test_read_plain_atomic_write(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_write_atomic, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC }, @@ -679,7 +713,7 @@ static void 
test_read_plain_atomic_write(struct kunit *test) __no_kcsan static void test_read_plain_atomic_rmw(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_atomic_rmw, &test_var, sizeof(test_var), @@ -701,13 +735,13 @@ static void test_read_plain_atomic_rmw(struct kunit *test) __no_kcsan static void test_zero_size_access(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, }, }; - const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_read_struct_zero_size, &test_struct.val[3], 0, 0 }, @@ -741,7 +775,7 @@ static void test_data_race(struct kunit *test) __no_kcsan static void test_assert_exclusive_writer(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -759,7 +793,7 @@ static void test_assert_exclusive_writer(struct kunit *test) __no_kcsan static void test_assert_exclusive_access(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, @@ -777,19 +811,19 @@ static void test_assert_exclusive_access(struct kunit *test) __no_kcsan static void test_assert_exclusive_access_writer(struct kunit *test) { - const struct expect_report expect_access_writer = { + struct expect_report expect_access_writer = { .access = { { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, }, }; - const struct expect_report expect_access_access = { + struct expect_report expect_access_access = { .access = { { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, }, }; - const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, @@ -813,7 +847,7 @@ static void test_assert_exclusive_access_writer(struct kunit *test) __no_kcsan static void test_assert_exclusive_bits_change(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_assert_bits_change, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, { test_kernel_change_bits, &test_var, sizeof(test_var), @@ -844,13 +878,13 @@ static void test_assert_exclusive_bits_nochange(struct kunit *test) __no_kcsan static void test_assert_exclusive_writer_scoped(struct kunit *test) { - const struct expect_report expect_start = { + struct expect_report expect_start = { .access = { { test_kernel_assert_writer_scoped, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_SCOPED 
}, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, }, }; - const struct expect_report expect_inscope = { + struct expect_report expect_inscope = { .access = { { test_enter_scope, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_SCOPED }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -871,16 +905,16 @@ static void test_assert_exclusive_writer_scoped(struct kunit *test) __no_kcsan static void test_assert_exclusive_access_scoped(struct kunit *test) { - const struct expect_report expect_start1 = { + struct expect_report expect_start1 = { .access = { { test_kernel_assert_access_scoped, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_SCOPED }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - const struct expect_report expect_start2 = { + struct expect_report expect_start2 = { .access = { expect_start1.access[0], expect_start1.access[0] }, }; - const struct expect_report expect_inscope = { + struct expect_report expect_inscope = { .access = { { test_enter_scope, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_SCOPED }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, @@ -985,7 +1019,7 @@ static void test_atomic_builtins(struct kunit *test) __no_kcsan static void test_1bit_value_change(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_xor_1bit, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) },
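Because each of the two accesses in an expected report may now independently
carry the "(reordered)" marker, __report_set_scoped() above is driven with the
four bit patterns 0..3. Hand-expanded for illustration (equivalent logic, not
code from the patch), report_matches_any_reordered() is effectively:

	/*
	 * Try every combination of the KCSAN_ACCESS_SCOPED bit on the two
	 * accesses, so a report matches whether or not either access was
	 * printed as "(reordered)".
	 */
	for (int accesses = 0; accesses < 4; ++accesses) {
		if (__report_matches(__report_set_scoped(r, accesses)))
			return true;
	}
	return false;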
From patchwork Tue Nov 30 11:44:19 2021
Subject: [PATCH v3 11/25] kcsan: test: Add test cases for memory barrier instrumentation
From: Marco Elver <elver@google.com>
Date: Tue, 30 Nov 2021 12:44:19 +0100
Message-Id: <20211130114433.2580590-12-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

Add test cases to check that memory barriers are instrumented correctly,
and that detection of missing memory barriers works as intended if
CONFIG_KCSAN_STRICT=y.

Signed-off-by: Marco Elver <elver@google.com>
---
v3:
* Remove __no_kcsan from barrier test, given __no_kcsan would now remove
  barrier instrumentation, too.
---
 kernel/kcsan/kcsan_test.c | 319 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 319 insertions(+)

diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c index ec054879201b..5bf94550bcdf 100644 --- a/kernel/kcsan/kcsan_test.c +++ b/kernel/kcsan/kcsan_test.c @@ -16,9 +16,12 @@ #define pr_fmt(fmt) "kcsan_test: " fmt #include +#include +#include #include #include #include +#include #include #include #include @@ -305,6 +308,16 @@ static DEFINE_SEQLOCK(test_seqlock); __no_kcsan static noinline void sink_value(long v) { WRITE_ONCE(test_sink, v); } +/* + * Generates a delay and some accesses that enter the runtime but do not produce
+ */ +static noinline void test_delay(int iter) +{ + while (iter--) + sink_value(READ_ONCE(test_sink)); +} + static noinline void test_kernel_read(void) { sink_value(test_var); } static noinline void test_kernel_write(void) @@ -466,8 +479,219 @@ static noinline void test_kernel_xor_1bit(void) kcsan_nestable_atomic_end(); } +#define TEST_KERNEL_LOCKED(name, acquire, release) \ + static noinline void test_kernel_##name(void) \ + { \ + long *flag = &test_struct.val[0]; \ + long v = 0; \ + if (!(acquire)) \ + return; \ + while (v++ < 100) { \ + test_var++; \ + barrier(); \ + } \ + release; \ + test_delay(10); \ + } + +TEST_KERNEL_LOCKED(with_memorder, + cmpxchg_acquire(flag, 0, 1) == 0, + smp_store_release(flag, 0)); +TEST_KERNEL_LOCKED(wrong_memorder, + cmpxchg_relaxed(flag, 0, 1) == 0, + WRITE_ONCE(*flag, 0)); +TEST_KERNEL_LOCKED(atomic_builtin_with_memorder, + __atomic_compare_exchange_n(flag, &v, 1, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED), + __atomic_store_n(flag, 0, __ATOMIC_RELEASE)); +TEST_KERNEL_LOCKED(atomic_builtin_wrong_memorder, + __atomic_compare_exchange_n(flag, &v, 1, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED), + __atomic_store_n(flag, 0, __ATOMIC_RELAXED)); + /* ===== Test cases ===== */ +/* + * Tests that various barriers have the expected effect on internal state. Not + * exhaustive on atomic_t operations. Unlike the selftest, also checks for + * too-strict barrier instrumentation; these can be tolerated, because it does + * not cause false positives, but at least we should be aware of such cases. + */ +static void test_barrier_nothreads(struct kunit *test) +{ +#ifdef CONFIG_KCSAN_WEAK_MEMORY + struct kcsan_scoped_access *reorder_access = ¤t->kcsan_ctx.reorder_access; +#else + struct kcsan_scoped_access *reorder_access = NULL; +#endif + arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED; + DEFINE_SPINLOCK(spinlock); + DEFINE_MUTEX(mutex); + atomic_t dummy; + + KCSAN_TEST_REQUIRES(test, reorder_access != NULL); + KCSAN_TEST_REQUIRES(test, IS_ENABLED(CONFIG_SMP)); + +#define __KCSAN_EXPECT_BARRIER(access_type, barrier, order_before, name) \ + do { \ + reorder_access->type = (access_type) | KCSAN_ACCESS_SCOPED; \ + reorder_access->size = sizeof(test_var); \ + barrier; \ + KUNIT_EXPECT_EQ_MSG(test, reorder_access->size, \ + order_before ? 0 : sizeof(test_var), \ + "improperly instrumented type=(" #access_type "): " name); \ + } while (0) +#define KCSAN_EXPECT_READ_BARRIER(b, o) __KCSAN_EXPECT_BARRIER(0, b, o, #b) +#define KCSAN_EXPECT_WRITE_BARRIER(b, o) __KCSAN_EXPECT_BARRIER(KCSAN_ACCESS_WRITE, b, o, #b) +#define KCSAN_EXPECT_RW_BARRIER(b, o) __KCSAN_EXPECT_BARRIER(KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE, b, o, #b) + + /* Force creating a valid entry in reorder_access first. */ + test_var = 0; + while (test_var++ < 1000000 && reorder_access->size != sizeof(test_var)) + __kcsan_check_read(&test_var, sizeof(test_var)); + KUNIT_ASSERT_EQ(test, reorder_access->size, sizeof(test_var)); + + kcsan_nestable_atomic_begin(); /* No watchpoints in called functions. 
*/ + + KCSAN_EXPECT_READ_BARRIER(mb(), true); + KCSAN_EXPECT_READ_BARRIER(wmb(), false); + KCSAN_EXPECT_READ_BARRIER(rmb(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb(), true); + KCSAN_EXPECT_READ_BARRIER(smp_wmb(), false); + KCSAN_EXPECT_READ_BARRIER(smp_rmb(), true); + KCSAN_EXPECT_READ_BARRIER(dma_wmb(), false); + KCSAN_EXPECT_READ_BARRIER(dma_rmb(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb__before_atomic(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb__after_atomic(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb__after_spinlock(), true); + KCSAN_EXPECT_READ_BARRIER(smp_store_mb(test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(smp_load_acquire(&test_var), false); + KCSAN_EXPECT_READ_BARRIER(smp_store_release(&test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(xchg(&test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(xchg_release(&test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(xchg_relaxed(&test_var, 0), false); + KCSAN_EXPECT_READ_BARRIER(cmpxchg(&test_var, 0, 0), true); + KCSAN_EXPECT_READ_BARRIER(cmpxchg_release(&test_var, 0, 0), true); + KCSAN_EXPECT_READ_BARRIER(cmpxchg_relaxed(&test_var, 0, 0), false); + KCSAN_EXPECT_READ_BARRIER(atomic_read(&dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_read_acquire(&dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_set(&dummy, 0), false); + KCSAN_EXPECT_READ_BARRIER(atomic_set_release(&dummy, 0), true); + KCSAN_EXPECT_READ_BARRIER(atomic_add(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return_acquire(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return_release(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return_relaxed(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add_acquire(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add_release(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add_relaxed(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(test_and_set_bit(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(test_and_clear_bit(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(test_and_change_bit(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(__clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(arch_spin_lock(&arch_spinlock), false); + KCSAN_EXPECT_READ_BARRIER(arch_spin_unlock(&arch_spinlock), true); + KCSAN_EXPECT_READ_BARRIER(spin_lock(&spinlock), false); + KCSAN_EXPECT_READ_BARRIER(spin_unlock(&spinlock), true); + KCSAN_EXPECT_READ_BARRIER(mutex_lock(&mutex), false); + KCSAN_EXPECT_READ_BARRIER(mutex_unlock(&mutex), true); + + KCSAN_EXPECT_WRITE_BARRIER(mb(), true); + KCSAN_EXPECT_WRITE_BARRIER(wmb(), true); + KCSAN_EXPECT_WRITE_BARRIER(rmb(), false); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_wmb(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_rmb(), false); + KCSAN_EXPECT_WRITE_BARRIER(dma_wmb(), true); + KCSAN_EXPECT_WRITE_BARRIER(dma_rmb(), false); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb__before_atomic(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb__after_atomic(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb__after_spinlock(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_store_mb(test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_load_acquire(&test_var), false); + KCSAN_EXPECT_WRITE_BARRIER(smp_store_release(&test_var, 0), true); + 
KCSAN_EXPECT_WRITE_BARRIER(xchg(&test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(xchg_release(&test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(xchg_relaxed(&test_var, 0), false); + KCSAN_EXPECT_WRITE_BARRIER(cmpxchg(&test_var, 0, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(cmpxchg_release(&test_var, 0, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(cmpxchg_relaxed(&test_var, 0, 0), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_read(&dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_read_acquire(&dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_set(&dummy, 0), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_set_release(&dummy, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return_acquire(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return_release(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return_relaxed(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add_acquire(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add_release(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add_relaxed(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(test_and_set_bit(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(test_and_clear_bit(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(test_and_change_bit(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(__clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(arch_spin_lock(&arch_spinlock), false); + KCSAN_EXPECT_WRITE_BARRIER(arch_spin_unlock(&arch_spinlock), true); + KCSAN_EXPECT_WRITE_BARRIER(spin_lock(&spinlock), false); + KCSAN_EXPECT_WRITE_BARRIER(spin_unlock(&spinlock), true); + KCSAN_EXPECT_WRITE_BARRIER(mutex_lock(&mutex), false); + KCSAN_EXPECT_WRITE_BARRIER(mutex_unlock(&mutex), true); + + KCSAN_EXPECT_RW_BARRIER(mb(), true); + KCSAN_EXPECT_RW_BARRIER(wmb(), true); + KCSAN_EXPECT_RW_BARRIER(rmb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_wmb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_rmb(), true); + KCSAN_EXPECT_RW_BARRIER(dma_wmb(), true); + KCSAN_EXPECT_RW_BARRIER(dma_rmb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb__before_atomic(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb__after_atomic(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb__after_spinlock(), true); + KCSAN_EXPECT_RW_BARRIER(smp_store_mb(test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(smp_load_acquire(&test_var), false); + KCSAN_EXPECT_RW_BARRIER(smp_store_release(&test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(xchg(&test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(xchg_release(&test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(xchg_relaxed(&test_var, 0), false); + KCSAN_EXPECT_RW_BARRIER(cmpxchg(&test_var, 0, 0), true); + KCSAN_EXPECT_RW_BARRIER(cmpxchg_release(&test_var, 0, 0), true); + KCSAN_EXPECT_RW_BARRIER(cmpxchg_relaxed(&test_var, 0, 0), false); + KCSAN_EXPECT_RW_BARRIER(atomic_read(&dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_read_acquire(&dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_set(&dummy, 0), false); + KCSAN_EXPECT_RW_BARRIER(atomic_set_release(&dummy, 0), true); + KCSAN_EXPECT_RW_BARRIER(atomic_add(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return(1, &dummy), true); + 
KCSAN_EXPECT_RW_BARRIER(atomic_add_return_acquire(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return_release(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return_relaxed(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add_acquire(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add_release(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add_relaxed(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(test_and_set_bit(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(test_and_clear_bit(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(test_and_change_bit(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(__clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(arch_spin_lock(&arch_spinlock), false); + KCSAN_EXPECT_RW_BARRIER(arch_spin_unlock(&arch_spinlock), true); + KCSAN_EXPECT_RW_BARRIER(spin_lock(&spinlock), false); + KCSAN_EXPECT_RW_BARRIER(spin_unlock(&spinlock), true); + KCSAN_EXPECT_RW_BARRIER(mutex_lock(&mutex), false); + KCSAN_EXPECT_RW_BARRIER(mutex_unlock(&mutex), true); + + kcsan_nestable_atomic_end(); +} + /* Simple test with normal data race. */ __no_kcsan static void test_basic(struct kunit *test) @@ -1039,6 +1263,90 @@ static void test_1bit_value_change(struct kunit *test) KUNIT_EXPECT_TRUE(test, match); } +__no_kcsan +static void test_correct_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_with_memorder, test_kernel_with_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + KUNIT_EXPECT_FALSE(test, match_expect); +} + +__no_kcsan +static void test_missing_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_wrong_memorder, test_kernel_wrong_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + if (IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY)) + KUNIT_EXPECT_TRUE(test, match_expect); + else + KUNIT_EXPECT_FALSE(test, match_expect); +} + +__no_kcsan +static void test_atomic_builtins_correct_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_atomic_builtin_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_atomic_builtin_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_atomic_builtin_with_memorder, + test_kernel_atomic_builtin_with_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + KUNIT_EXPECT_FALSE(test, match_expect); +} + +__no_kcsan +static void 
test_atomic_builtins_missing_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_atomic_builtin_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_atomic_builtin_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_atomic_builtin_wrong_memorder, + test_kernel_atomic_builtin_wrong_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + if (IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY)) + KUNIT_EXPECT_TRUE(test, match_expect); + else + KUNIT_EXPECT_FALSE(test, match_expect); +} + /* * Generate thread counts for all test cases. Values generated are in interval * [2, 5] followed by exponentially increasing thread counts from 8 to 32. @@ -1088,6 +1396,7 @@ static const void *nthreads_gen_params(const void *prev, char *desc) #define KCSAN_KUNIT_CASE(test_name) KUNIT_CASE_PARAM(test_name, nthreads_gen_params) static struct kunit_case kcsan_test_cases[] = { + KUNIT_CASE(test_barrier_nothreads), KCSAN_KUNIT_CASE(test_basic), KCSAN_KUNIT_CASE(test_concurrent_races), KCSAN_KUNIT_CASE(test_novalue_change), @@ -1112,6 +1421,10 @@ static struct kunit_case kcsan_test_cases[] = { KCSAN_KUNIT_CASE(test_seqlock_noreport), KCSAN_KUNIT_CASE(test_atomic_builtins), KCSAN_KUNIT_CASE(test_1bit_value_change), + KCSAN_KUNIT_CASE(test_correct_barrier), + KCSAN_KUNIT_CASE(test_missing_barrier), + KCSAN_KUNIT_CASE(test_atomic_builtins_correct_barrier), + KCSAN_KUNIT_CASE(test_atomic_builtins_missing_barrier), {}, }; @@ -1176,6 +1489,9 @@ static int test_init(struct kunit *test) observed.nlines = 0; spin_unlock_irqrestore(&observed.lock, flags); + if (strstr(test->name, "nothreads")) + return 0; + if (!torture_init_begin((char *)test->name, 1)) return -EBUSY; @@ -1218,6 +1534,9 @@ static void test_exit(struct kunit *test) struct task_struct **stop_thread; int i; + if (strstr(test->name, "nothreads")) + return; + if (torture_cleanup_begin()) return;
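To make the generated test kernels easier to read, this is the hand-expanded
(illustrative) result of TEST_KERNEL_LOCKED(wrong_memorder, ...) from the
patch above: a relaxed cmpxchg acts as the lock acquire and a plain WRITE_ONCE
as the unlock, neither providing the ordering a real lock would, which is
exactly what the missing-barrier tests expect KCSAN to report when
CONFIG_KCSAN_WEAK_MEMORY=y:

	static noinline void test_kernel_wrong_memorder(void)
	{
		long *flag = &test_struct.val[0];
		long v = 0;

		if (!(cmpxchg_relaxed(flag, 0, 1) == 0))
			return;			/* "lock" not acquired */
		while (v++ < 100) {
			test_var++;		/* critical section */
			barrier();
		}
		WRITE_ONCE(*flag, 0);		/* release without memory barrier */
		test_delay(10);
	}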
From patchwork Tue Nov 30 11:44:20 2021 Date: Tue, 30 Nov 2021 12:44:20 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-13-elver@google.com> Subject: [PATCH v3 12/25] kcsan: Ignore GCC 11+ warnings about TSan runtime support From: Marco Elver GCC 11 has introduced a new warning option, -Wtsan [1], to warn about unsupported operations in the TSan runtime. But KCSAN != TSan runtime, so none of the warnings apply. [1] https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Warning-Options.html Ignore the warnings. Currently the warning only fires in the test for __atomic_thread_fence(): kernel/kcsan/kcsan_test.c: In function ‘test_atomic_builtins’: kernel/kcsan/kcsan_test.c:1234:17: warning: ‘atomic_thread_fence’ is not supported with ‘-fsanitize=thread’ [-Wtsan] 1234 | __atomic_thread_fence(__ATOMIC_SEQ_CST); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ which exists to ensure the KCSAN runtime keeps supporting the builtin instrumentation.
Signed-off-by: Marco Elver --- scripts/Makefile.kcsan | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/scripts/Makefile.kcsan b/scripts/Makefile.kcsan index 4c7f0d282e42..19f693b68a96 100644 --- a/scripts/Makefile.kcsan +++ b/scripts/Makefile.kcsan @@ -13,6 +13,12 @@ kcsan-cflags := -fsanitize=thread -fno-optimize-sibling-calls \ $(call cc-option,$(call cc-param,tsan-compound-read-before-write=1),$(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1))) \ $(call cc-param,tsan-distinguish-volatile=1) +ifdef CONFIG_CC_IS_GCC +# GCC started warning about operations unsupported by the TSan runtime. But +# KCSAN != TSan, so just ignore these warnings. +kcsan-cflags += -Wno-tsan +endif + ifndef CONFIG_KCSAN_WEAK_MEMORY kcsan-cflags += $(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0)) endif
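The warning that the new -Wno-tsan block suppresses is easy to reproduce outside the kernel; a minimal user-space sketch (file name invented for illustration), assuming any GCC version >= 11:

/* tsan-warn.c -- compile with: gcc -fsanitize=thread -Wtsan -c tsan-warn.c */
void fence(void)
{
	/*
	 * GCC 11+ warns here: 'atomic_thread_fence' is not supported with
	 * '-fsanitize=thread' [-Wtsan]. Passing -Wno-tsan, as Makefile.kcsan
	 * now does, silences it; this is correct for KCSAN because its
	 * runtime, unlike TSan's, does support this builtin.
	 */
	__atomic_thread_fence(__ATOMIC_SEQ_CST);
}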
From patchwork Tue Nov 30 11:44:21 2021 Date: Tue, 30 Nov 2021 12:44:21 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-14-elver@google.com> Subject: [PATCH v3 13/25] kcsan: selftest: Add test case to check memory barrier instrumentation From: Marco Elver Memory barrier instrumentation is crucial to avoid false positives. To avoid surprises, run a simple test case in the boot-time selftest to ensure memory barriers are still instrumented correctly. Signed-off-by: Marco Elver --- kernel/kcsan/Makefile | 2 + kernel/kcsan/selftest.c | 141 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 143 insertions(+) diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile index c2bb07f5bcc7..ff47e896de3b 100644 --- a/kernel/kcsan/Makefile +++ b/kernel/kcsan/Makefile @@ -11,6 +11,8 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \ -fno-stack-protector -DDISABLE_BRANCH_PROFILING obj-y := core.o debugfs.o report.o + +KCSAN_INSTRUMENT_BARRIERS_selftest.o := y obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o CFLAGS_kcsan_test.o := $(CFLAGS_KCSAN) -g -fno-omit-frame-pointer diff --git a/kernel/kcsan/selftest.c b/kernel/kcsan/selftest.c index b4295a3892b7..08c6b84b9ebe 100644 --- a/kernel/kcsan/selftest.c +++ b/kernel/kcsan/selftest.c @@ -7,10 +7,15 @@ #define pr_fmt(fmt) "kcsan: " fmt +#include +#include #include +#include #include #include #include +#include +#include #include #include "encoding.h" @@ -103,6 +108,141 @@ static bool __init test_matching_access(void) return true; } +/* + * Correct memory barrier instrumentation is critical to avoiding false + * positives: simple test to check at boot certain barriers are always properly + * instrumented. See kcsan_test for a more complete test. + */ +static bool __init test_barrier(void) +{ +#ifdef CONFIG_KCSAN_WEAK_MEMORY + struct kcsan_scoped_access *reorder_access = &current->kcsan_ctx.reorder_access; +#else + struct kcsan_scoped_access *reorder_access = NULL; +#endif + bool ret = true; + arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED; + DEFINE_SPINLOCK(spinlock); + atomic_t dummy; + long test_var; + + if (!reorder_access || !IS_ENABLED(CONFIG_SMP)) + return true; + +#define __KCSAN_CHECK_BARRIER(access_type, barrier, name) \ + do { \ + reorder_access->type = (access_type) | KCSAN_ACCESS_SCOPED; \ + reorder_access->size = 1; \ + barrier; \ + if (reorder_access->size != 0) { \ + pr_err("improperly instrumented type=(" #access_type "): " name "\n"); \ + ret = false; \ + } \ + } while (0) +#define KCSAN_CHECK_READ_BARRIER(b) __KCSAN_CHECK_BARRIER(0, b, #b) +#define KCSAN_CHECK_WRITE_BARRIER(b) __KCSAN_CHECK_BARRIER(KCSAN_ACCESS_WRITE, b, #b) +#define KCSAN_CHECK_RW_BARRIER(b) __KCSAN_CHECK_BARRIER(KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND, b, #b) + kcsan_nestable_atomic_begin(); /* No watchpoints in called functions.
*/ + + KCSAN_CHECK_READ_BARRIER(mb()); + KCSAN_CHECK_READ_BARRIER(rmb()); + KCSAN_CHECK_READ_BARRIER(smp_mb()); + KCSAN_CHECK_READ_BARRIER(smp_rmb()); + KCSAN_CHECK_READ_BARRIER(dma_rmb()); + KCSAN_CHECK_READ_BARRIER(smp_mb__before_atomic()); + KCSAN_CHECK_READ_BARRIER(smp_mb__after_atomic()); + KCSAN_CHECK_READ_BARRIER(smp_mb__after_spinlock()); + KCSAN_CHECK_READ_BARRIER(smp_store_mb(test_var, 0)); + KCSAN_CHECK_READ_BARRIER(smp_store_release(&test_var, 0)); + KCSAN_CHECK_READ_BARRIER(xchg(&test_var, 0)); + KCSAN_CHECK_READ_BARRIER(xchg_release(&test_var, 0)); + KCSAN_CHECK_READ_BARRIER(cmpxchg(&test_var, 0, 0)); + KCSAN_CHECK_READ_BARRIER(cmpxchg_release(&test_var, 0, 0)); + KCSAN_CHECK_READ_BARRIER(atomic_set_release(&dummy, 0)); + KCSAN_CHECK_READ_BARRIER(atomic_add_return(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(atomic_add_return_release(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(atomic_fetch_add(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(atomic_fetch_add_release(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(test_and_set_bit(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(test_and_clear_bit(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(test_and_change_bit(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(__clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var)); + arch_spin_lock(&arch_spinlock); + KCSAN_CHECK_READ_BARRIER(arch_spin_unlock(&arch_spinlock)); + spin_lock(&spinlock); + KCSAN_CHECK_READ_BARRIER(spin_unlock(&spinlock)); + + KCSAN_CHECK_WRITE_BARRIER(mb()); + KCSAN_CHECK_WRITE_BARRIER(wmb()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb()); + KCSAN_CHECK_WRITE_BARRIER(smp_wmb()); + KCSAN_CHECK_WRITE_BARRIER(dma_wmb()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb__before_atomic()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb__after_atomic()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb__after_spinlock()); + KCSAN_CHECK_WRITE_BARRIER(smp_store_mb(test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(smp_store_release(&test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(xchg(&test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(xchg_release(&test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(cmpxchg(&test_var, 0, 0)); + KCSAN_CHECK_WRITE_BARRIER(cmpxchg_release(&test_var, 0, 0)); + KCSAN_CHECK_WRITE_BARRIER(atomic_set_release(&dummy, 0)); + KCSAN_CHECK_WRITE_BARRIER(atomic_add_return(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(atomic_add_return_release(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(atomic_fetch_add(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(atomic_fetch_add_release(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(test_and_set_bit(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(test_and_clear_bit(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(test_and_change_bit(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(__clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var)); + arch_spin_lock(&arch_spinlock); + KCSAN_CHECK_WRITE_BARRIER(arch_spin_unlock(&arch_spinlock)); + spin_lock(&spinlock); + KCSAN_CHECK_WRITE_BARRIER(spin_unlock(&spinlock)); + + KCSAN_CHECK_RW_BARRIER(mb()); + KCSAN_CHECK_RW_BARRIER(wmb()); + KCSAN_CHECK_RW_BARRIER(rmb()); + KCSAN_CHECK_RW_BARRIER(smp_mb()); + KCSAN_CHECK_RW_BARRIER(smp_wmb()); + KCSAN_CHECK_RW_BARRIER(smp_rmb()); + KCSAN_CHECK_RW_BARRIER(dma_wmb()); + KCSAN_CHECK_RW_BARRIER(dma_rmb()); + KCSAN_CHECK_RW_BARRIER(smp_mb__before_atomic()); + KCSAN_CHECK_RW_BARRIER(smp_mb__after_atomic()); + KCSAN_CHECK_RW_BARRIER(smp_mb__after_spinlock()); + 
KCSAN_CHECK_RW_BARRIER(smp_store_mb(test_var, 0)); + KCSAN_CHECK_RW_BARRIER(smp_store_release(&test_var, 0)); + KCSAN_CHECK_RW_BARRIER(xchg(&test_var, 0)); + KCSAN_CHECK_RW_BARRIER(xchg_release(&test_var, 0)); + KCSAN_CHECK_RW_BARRIER(cmpxchg(&test_var, 0, 0)); + KCSAN_CHECK_RW_BARRIER(cmpxchg_release(&test_var, 0, 0)); + KCSAN_CHECK_RW_BARRIER(atomic_set_release(&dummy, 0)); + KCSAN_CHECK_RW_BARRIER(atomic_add_return(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(atomic_add_return_release(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(atomic_fetch_add(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(atomic_fetch_add_release(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(test_and_set_bit(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(test_and_clear_bit(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(test_and_change_bit(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(__clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var)); + arch_spin_lock(&arch_spinlock); + KCSAN_CHECK_RW_BARRIER(arch_spin_unlock(&arch_spinlock)); + spin_lock(&spinlock); + KCSAN_CHECK_RW_BARRIER(spin_unlock(&spinlock)); + + kcsan_nestable_atomic_end(); + + return ret; +} + static int __init kcsan_selftest(void) { int passed = 0; @@ -120,6 +260,7 @@ static int __init kcsan_selftest(void) RUN_TEST(test_requires); RUN_TEST(test_encode_decode); RUN_TEST(test_matching_access); + RUN_TEST(test_barrier); pr_info("selftest: %d/%d tests passed\n", passed, total); if (passed != total)
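To unpack what each barrier check above actually does (commentary, not patch code): with CONFIG_KCSAN_WEAK_MEMORY, current->kcsan_ctx.reorder_access describes one prior access that KCSAN may later simulate as having been delayed past subsequent code, and instrumented barriers are expected to invalidate it. Expanding KCSAN_CHECK_READ_BARRIER(smp_mb()) by hand gives roughly:

reorder_access->type = 0 | KCSAN_ACCESS_SCOPED;	/* pretend a plain read is pending reordering */
reorder_access->size = 1;			/* arm the check */
smp_mb();					/* an instrumented barrier must flush it... */
if (reorder_access->size != 0) {		/* ...so non-zero size means missing instrumentation */
	pr_err("improperly instrumented type=(0): smp_mb()\n");
	ret = false;
}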
From patchwork Tue Nov 30 11:44:22 2021 Date: Tue, 30 Nov 2021 12:44:22 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-15-elver@google.com> Subject: [PATCH v3 14/25] locking/barriers, kcsan: Add instrumentation for barriers From: Marco Elver Adds the required KCSAN instrumentation for barriers if CONFIG_SMP. KCSAN supports modeling the effects of: smp_mb() smp_rmb() smp_wmb() smp_store_release() Signed-off-by: Marco Elver --- include/asm-generic/barrier.h | 29 +++++++++++++++-------------- include/linux/spinlock.h | 2 +- 2 files changed, 16 insertions(+), 15 deletions(-) diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 640f09479bdf..27a9c9edfef6 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -14,6 +14,7 @@ #ifndef __ASSEMBLY__ #include +#include #include #ifndef nop @@ -62,15 +63,15 @@ #ifdef CONFIG_SMP #ifndef smp_mb -#define smp_mb() __smp_mb() +#define smp_mb() do { kcsan_mb(); __smp_mb(); } while (0) #endif #ifndef smp_rmb -#define smp_rmb() __smp_rmb() +#define smp_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0) #endif #ifndef smp_wmb -#define smp_wmb() __smp_wmb() +#define smp_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0) #endif #else /* !CONFIG_SMP */ @@ -123,19 +124,19 @@ do { \ #ifdef CONFIG_SMP #ifndef smp_store_mb -#define smp_store_mb(var, value) __smp_store_mb(var, value) +#define smp_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0) #endif #ifndef smp_mb__before_atomic -#define smp_mb__before_atomic() __smp_mb__before_atomic() +#define smp_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0) #endif #ifndef smp_mb__after_atomic -#define smp_mb__after_atomic() __smp_mb__after_atomic() +#define smp_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0) #endif #ifndef smp_store_release -#define smp_store_release(p, v) __smp_store_release(p, v) +#define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0) #endif #ifndef smp_load_acquire @@ -178,13 +179,13 @@ do { \ #endif /* CONFIG_SMP */ /* Barriers for virtual machine guests when talking to an SMP host */ -#define virt_mb() __smp_mb()
-#define virt_rmb() __smp_rmb() -#define virt_wmb() __smp_wmb() -#define virt_store_mb(var, value) __smp_store_mb(var, value) -#define virt_mb__before_atomic() __smp_mb__before_atomic() -#define virt_mb__after_atomic() __smp_mb__after_atomic() -#define virt_store_release(p, v) __smp_store_release(p, v) +#define virt_mb() do { kcsan_mb(); __smp_mb(); } while (0) +#define virt_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0) +#define virt_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0) +#define virt_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0) +#define virt_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0) +#define virt_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0) +#define virt_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0) #define virt_load_acquire(p) __smp_load_acquire(p) /** diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index b4e5ca23f840..5c0c5174155d 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -171,7 +171,7 @@ do { \ * Architectures that can implement ACQUIRE better need to take care. */ #ifndef smp_mb__after_spinlock -#define smp_mb__after_spinlock() do { } while (0) +#define smp_mb__after_spinlock() kcsan_mb() #endif #ifdef CONFIG_DEBUG_SPINLOCK
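What this instrumentation buys in practice (an illustration, not code from the patch): with barriers modeled, a kernel built with CONFIG_KCSAN_WEAK_MEMORY can distinguish a correctly fenced publish from a broken one, because kcsan_wmb() tells the runtime the data store can no longer be simulated as reordered past the flag update:

/* Hypothetical publish/consume pair; names invented for illustration. */
static int data;
static int ready;

static void publish_ok(void)
{
	data = 42;
	smp_wmb();		/* now also runs kcsan_wmb() */
	WRITE_ONCE(ready, 1);
}

static void publish_broken(void)
{
	data = 42;		/* no barrier: KCSAN may simulate this store
				 * reordered after the flag and report a race */
	WRITE_ONCE(ready, 1);
}

static void consume(void)
{
	if (READ_ONCE(ready)) {
		smp_rmb();	/* pairs with smp_wmb() in publish_ok() */
		WARN_ON(data != 42);
	}
}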
From patchwork Tue Nov 30 11:44:23 2021 Date: Tue, 30 Nov 2021 12:44:23 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-16-elver@google.com> Subject: [PATCH v3 15/25] locking/barriers, kcsan: Support generic instrumentation From: Marco Elver Thus far only smp_*() barriers had been defined by asm-generic/barrier.h based on __smp_*() barriers, because the !SMP case is usually generic. With the introduction of instrumentation, it also makes sense to have asm-generic/barrier.h assist in the definition of instrumented versions of mb(), rmb(), wmb(), dma_rmb(), and dma_wmb(). Because there is no requirement to distinguish the !SMP case, the definition can be simpler: we can avoid also providing fallbacks for the __ prefixed cases, and only check whether the __ prefixed variant is defined, to finally define the KCSAN-instrumented versions. This also allows for the compiler to complain if an architecture accidentally defines both the normal and __ prefixed variant. Signed-off-by: Marco Elver --- include/asm-generic/barrier.h | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 27a9c9edfef6..02c4339c8eeb 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -21,6 +21,31 @@ #define nop() asm volatile ("nop") #endif +/* + * Architectures that want generic instrumentation can define __ prefixed + * variants of all barriers. + */ + +#ifdef __mb +#define mb() do { kcsan_mb(); __mb(); } while (0) +#endif + +#ifdef __rmb +#define rmb() do { kcsan_rmb(); __rmb(); } while (0) +#endif + +#ifdef __wmb +#define wmb() do { kcsan_wmb(); __wmb(); } while (0) +#endif + +#ifdef __dma_rmb +#define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0) +#endif + +#ifdef __dma_wmb +#define dma_wmb() do { kcsan_wmb(); __dma_wmb(); } while (0) +#endif + /* * Force strict CPU ordering. And yes, this is required on UP too when we're * talking to devices.
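Under this scheme, an architecture opts in by defining only the __ prefixed variants before the generic header is included; a sketch of what that could look like (the architecture, header path, and fence instructions below are all made up for illustration):

/* arch/foo/include/asm/barrier.h -- hypothetical example */
#define __mb()	asm volatile("fence"   ::: "memory")
#define __rmb()	asm volatile("fence.r" ::: "memory")
#define __wmb()	asm volatile("fence.w" ::: "memory")

#include <asm-generic/barrier.h>

/*
 * The generic header now sees __mb/__rmb/__wmb and emits, e.g.:
 *	#define mb() do { kcsan_mb(); __mb(); } while (0)
 * Had the architecture also defined mb() itself, the compiler would
 * complain about the macro redefinition, catching the mistake early.
 */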
From patchwork Tue Nov 30 11:44:24 2021 Date: Tue, 30 Nov 2021 12:44:24 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-17-elver@google.com> Subject: [PATCH v3 16/25] locking/atomics, kcsan: Add instrumentation for barriers From: Marco Elver To: elver@google.com, "Paul E.
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org Adds the required KCSAN instrumentation for barriers of atomics. Signed-off-by: Marco Elver --- include/linux/atomic/atomic-instrumented.h | 135 ++++++++++++++++++++- scripts/atomic/gen-atomic-instrumented.sh | 41 +++++-- 2 files changed, 166 insertions(+), 10 deletions(-) diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h index a0f654370da3..5d69b143c28e 100644 --- a/include/linux/atomic/atomic-instrumented.h +++ b/include/linux/atomic/atomic-instrumented.h @@ -45,6 +45,7 @@ atomic_set(atomic_t *v, int i) static __always_inline void atomic_set_release(atomic_t *v, int i) { + kcsan_release(); instrument_atomic_write(v, sizeof(*v)); arch_atomic_set_release(v, i); } @@ -59,6 +60,7 @@ atomic_add(int i, atomic_t *v) static __always_inline int atomic_add_return(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return(i, v); } @@ -73,6 +75,7 @@ atomic_add_return_acquire(int i, atomic_t *v) static __always_inline int atomic_add_return_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return_release(i, v); } @@ -87,6 +90,7 @@ atomic_add_return_relaxed(int i, atomic_t *v) static __always_inline int atomic_fetch_add(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add(i, v); } @@ -101,6 +105,7 @@ atomic_fetch_add_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_add_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_release(i, v); } @@ -122,6 +127,7 @@ atomic_sub(int i, atomic_t *v) static __always_inline int atomic_sub_return(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return(i, v); } @@ -136,6 +142,7 @@ atomic_sub_return_acquire(int i, atomic_t *v) static __always_inline int atomic_sub_return_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return_release(i, v); } @@ -150,6 +157,7 @@ atomic_sub_return_relaxed(int i, atomic_t *v) static __always_inline int atomic_fetch_sub(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub(i, v); } @@ -164,6 +172,7 @@ atomic_fetch_sub_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_sub_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub_release(i, v); } @@ -185,6 +194,7 @@ atomic_inc(atomic_t *v) static __always_inline int atomic_inc_return(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return(v); } @@ -199,6 +209,7 @@ atomic_inc_return_acquire(atomic_t *v) static __always_inline int atomic_inc_return_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return_release(v); } @@ -213,6 +224,7 @@ atomic_inc_return_relaxed(atomic_t *v) static 
__always_inline int atomic_fetch_inc(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc(v); } @@ -227,6 +239,7 @@ atomic_fetch_inc_acquire(atomic_t *v) static __always_inline int atomic_fetch_inc_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc_release(v); } @@ -248,6 +261,7 @@ atomic_dec(atomic_t *v) static __always_inline int atomic_dec_return(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_return(v); } @@ -262,6 +276,7 @@ atomic_dec_return_acquire(atomic_t *v) static __always_inline int atomic_dec_return_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_return_release(v); } @@ -276,6 +291,7 @@ atomic_dec_return_relaxed(atomic_t *v) static __always_inline int atomic_fetch_dec(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec(v); } @@ -290,6 +306,7 @@ atomic_fetch_dec_acquire(atomic_t *v) static __always_inline int atomic_fetch_dec_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec_release(v); } @@ -311,6 +328,7 @@ atomic_and(int i, atomic_t *v) static __always_inline int atomic_fetch_and(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and(i, v); } @@ -325,6 +343,7 @@ atomic_fetch_and_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_and_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and_release(i, v); } @@ -346,6 +365,7 @@ atomic_andnot(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot(i, v); } @@ -360,6 +380,7 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot_release(i, v); } @@ -381,6 +402,7 @@ atomic_or(int i, atomic_t *v) static __always_inline int atomic_fetch_or(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or(i, v); } @@ -395,6 +417,7 @@ atomic_fetch_or_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_or_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or_release(i, v); } @@ -416,6 +439,7 @@ atomic_xor(int i, atomic_t *v) static __always_inline int atomic_fetch_xor(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor(i, v); } @@ -430,6 +454,7 @@ atomic_fetch_xor_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_xor_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor_release(i, v); } @@ -444,6 +469,7 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v) static __always_inline int atomic_xchg(atomic_t *v, int i) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_xchg(v, i); } @@ -458,6 +484,7 @@ atomic_xchg_acquire(atomic_t *v, int i) static __always_inline int atomic_xchg_release(atomic_t *v, int i) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return 
arch_atomic_xchg_release(v, i); } @@ -472,6 +499,7 @@ atomic_xchg_relaxed(atomic_t *v, int i) static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg(v, old, new); } @@ -486,6 +514,7 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new) static __always_inline int atomic_cmpxchg_release(atomic_t *v, int old, int new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg_release(v, old, new); } @@ -500,6 +529,7 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg(v, old, new); @@ -516,6 +546,7 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) static __always_inline bool atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg_release(v, old, new); @@ -532,6 +563,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_and_test(i, v); } @@ -539,6 +571,7 @@ atomic_sub_and_test(int i, atomic_t *v) static __always_inline bool atomic_dec_and_test(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_and_test(v); } @@ -546,6 +579,7 @@ atomic_dec_and_test(atomic_t *v) static __always_inline bool atomic_inc_and_test(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_and_test(v); } @@ -553,6 +587,7 @@ atomic_inc_and_test(atomic_t *v) static __always_inline bool atomic_add_negative(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_negative(i, v); } @@ -560,6 +595,7 @@ atomic_add_negative(int i, atomic_t *v) static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_unless(v, a, u); } @@ -567,6 +603,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u) static __always_inline bool atomic_add_unless(atomic_t *v, int a, int u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_unless(v, a, u); } @@ -574,6 +611,7 @@ atomic_add_unless(atomic_t *v, int a, int u) static __always_inline bool atomic_inc_not_zero(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_not_zero(v); } @@ -581,6 +619,7 @@ atomic_inc_not_zero(atomic_t *v) static __always_inline bool atomic_inc_unless_negative(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_unless_negative(v); } @@ -588,6 +627,7 @@ atomic_inc_unless_negative(atomic_t *v) static __always_inline bool atomic_dec_unless_positive(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_unless_positive(v); } @@ -595,6 +635,7 @@ atomic_dec_unless_positive(atomic_t *v) static __always_inline int atomic_dec_if_positive(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_if_positive(v); } @@ -623,6 +664,7 @@ atomic64_set(atomic64_t *v, s64 i) 
static __always_inline void atomic64_set_release(atomic64_t *v, s64 i) { + kcsan_release(); instrument_atomic_write(v, sizeof(*v)); arch_atomic64_set_release(v, i); } @@ -637,6 +679,7 @@ atomic64_add(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return(i, v); } @@ -651,6 +694,7 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return_release(i, v); } @@ -665,6 +709,7 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add(i, v); } @@ -679,6 +724,7 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_release(i, v); } @@ -700,6 +746,7 @@ atomic64_sub(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return(i, v); } @@ -714,6 +761,7 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return_release(i, v); } @@ -728,6 +776,7 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_sub(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub(i, v); } @@ -742,6 +791,7 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_sub_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub_release(i, v); } @@ -763,6 +813,7 @@ atomic64_inc(atomic64_t *v) static __always_inline s64 atomic64_inc_return(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return(v); } @@ -777,6 +828,7 @@ atomic64_inc_return_acquire(atomic64_t *v) static __always_inline s64 atomic64_inc_return_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return_release(v); } @@ -791,6 +843,7 @@ atomic64_inc_return_relaxed(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc(v); } @@ -805,6 +858,7 @@ atomic64_fetch_inc_acquire(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc_release(v); } @@ -826,6 +880,7 @@ atomic64_dec(atomic64_t *v) static __always_inline s64 atomic64_dec_return(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return(v); } @@ -840,6 +895,7 @@ atomic64_dec_return_acquire(atomic64_t *v) static __always_inline s64 atomic64_dec_return_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return_release(v); } @@ -854,6 +910,7 @@ 
atomic64_dec_return_relaxed(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec(v); } @@ -868,6 +925,7 @@ atomic64_fetch_dec_acquire(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec_release(v); } @@ -889,6 +947,7 @@ atomic64_and(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and(i, v); } @@ -903,6 +962,7 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and_release(i, v); } @@ -924,6 +984,7 @@ atomic64_andnot(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot(i, v); } @@ -938,6 +999,7 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot_release(i, v); } @@ -959,6 +1021,7 @@ atomic64_or(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_or(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or(i, v); } @@ -973,6 +1036,7 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_or_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or_release(i, v); } @@ -994,6 +1058,7 @@ atomic64_xor(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor(i, v); } @@ -1008,6 +1073,7 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor_release(i, v); } @@ -1022,6 +1088,7 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_xchg(atomic64_t *v, s64 i) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg(v, i); } @@ -1036,6 +1103,7 @@ atomic64_xchg_acquire(atomic64_t *v, s64 i) static __always_inline s64 atomic64_xchg_release(atomic64_t *v, s64 i) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg_release(v, i); } @@ -1050,6 +1118,7 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 i) static __always_inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg(v, old, new); } @@ -1064,6 +1133,7 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) static __always_inline s64 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg_release(v, old, new); } @@ -1078,6 +1148,7 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 
*old, s64 new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg(v, old, new); @@ -1094,6 +1165,7 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg_release(v, old, new); @@ -1110,6 +1182,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_sub_and_test(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_and_test(i, v); } @@ -1117,6 +1190,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v) static __always_inline bool atomic64_dec_and_test(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_and_test(v); } @@ -1124,6 +1198,7 @@ atomic64_dec_and_test(atomic64_t *v) static __always_inline bool atomic64_inc_and_test(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_and_test(v); } @@ -1131,6 +1206,7 @@ atomic64_inc_and_test(atomic64_t *v) static __always_inline bool atomic64_add_negative(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_negative(i, v); } @@ -1138,6 +1214,7 @@ atomic64_add_negative(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_unless(v, a, u); } @@ -1145,6 +1222,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_unless(v, a, u); } @@ -1152,6 +1230,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u) static __always_inline bool atomic64_inc_not_zero(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_not_zero(v); } @@ -1159,6 +1238,7 @@ atomic64_inc_not_zero(atomic64_t *v) static __always_inline bool atomic64_inc_unless_negative(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_unless_negative(v); } @@ -1166,6 +1246,7 @@ atomic64_inc_unless_negative(atomic64_t *v) static __always_inline bool atomic64_dec_unless_positive(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_unless_positive(v); } @@ -1173,6 +1254,7 @@ atomic64_dec_unless_positive(atomic64_t *v) static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_if_positive(v); } @@ -1201,6 +1283,7 @@ atomic_long_set(atomic_long_t *v, long i) static __always_inline void atomic_long_set_release(atomic_long_t *v, long i) { + kcsan_release(); instrument_atomic_write(v, sizeof(*v)); arch_atomic_long_set_release(v, i); } @@ -1215,6 +1298,7 @@ atomic_long_add(long i, atomic_long_t *v) static __always_inline long atomic_long_add_return(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_return(i, v); } @@ -1229,6 +1313,7 @@ atomic_long_add_return_acquire(long i, atomic_long_t *v) static __always_inline long 
atomic_long_add_return_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_return_release(i, v); } @@ -1243,6 +1328,7 @@ atomic_long_add_return_relaxed(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_add(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_add(i, v); } @@ -1257,6 +1343,7 @@ atomic_long_fetch_add_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_add_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_add_release(i, v); } @@ -1278,6 +1365,7 @@ atomic_long_sub(long i, atomic_long_t *v) static __always_inline long atomic_long_sub_return(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_sub_return(i, v); } @@ -1292,6 +1380,7 @@ atomic_long_sub_return_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_sub_return_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_sub_return_release(i, v); } @@ -1306,6 +1395,7 @@ atomic_long_sub_return_relaxed(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_sub(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_sub(i, v); } @@ -1320,6 +1410,7 @@ atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_sub_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_sub_release(i, v); } @@ -1341,6 +1432,7 @@ atomic_long_inc(atomic_long_t *v) static __always_inline long atomic_long_inc_return(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_return(v); } @@ -1355,6 +1447,7 @@ atomic_long_inc_return_acquire(atomic_long_t *v) static __always_inline long atomic_long_inc_return_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_return_release(v); } @@ -1369,6 +1462,7 @@ atomic_long_inc_return_relaxed(atomic_long_t *v) static __always_inline long atomic_long_fetch_inc(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_inc(v); } @@ -1383,6 +1477,7 @@ atomic_long_fetch_inc_acquire(atomic_long_t *v) static __always_inline long atomic_long_fetch_inc_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_inc_release(v); } @@ -1404,6 +1499,7 @@ atomic_long_dec(atomic_long_t *v) static __always_inline long atomic_long_dec_return(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_return(v); } @@ -1418,6 +1514,7 @@ atomic_long_dec_return_acquire(atomic_long_t *v) static __always_inline long atomic_long_dec_return_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_return_release(v); } @@ -1432,6 +1529,7 @@ atomic_long_dec_return_relaxed(atomic_long_t *v) static __always_inline long atomic_long_fetch_dec(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_dec(v); } @@ -1446,6 +1544,7 @@ 
atomic_long_fetch_dec_acquire(atomic_long_t *v) static __always_inline long atomic_long_fetch_dec_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_dec_release(v); } @@ -1467,6 +1566,7 @@ atomic_long_and(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_and(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_and(i, v); } @@ -1481,6 +1581,7 @@ atomic_long_fetch_and_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_and_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_and_release(i, v); } @@ -1502,6 +1603,7 @@ atomic_long_andnot(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_andnot(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_andnot(i, v); } @@ -1516,6 +1618,7 @@ atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_andnot_release(i, v); } @@ -1537,6 +1640,7 @@ atomic_long_or(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_or(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_or(i, v); } @@ -1551,6 +1655,7 @@ atomic_long_fetch_or_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_or_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_or_release(i, v); } @@ -1572,6 +1677,7 @@ atomic_long_xor(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_xor(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_xor(i, v); } @@ -1586,6 +1692,7 @@ atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_xor_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_xor_release(i, v); } @@ -1600,6 +1707,7 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) static __always_inline long atomic_long_xchg(atomic_long_t *v, long i) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_xchg(v, i); } @@ -1614,6 +1722,7 @@ atomic_long_xchg_acquire(atomic_long_t *v, long i) static __always_inline long atomic_long_xchg_release(atomic_long_t *v, long i) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_xchg_release(v, i); } @@ -1628,6 +1737,7 @@ atomic_long_xchg_relaxed(atomic_long_t *v, long i) static __always_inline long atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_cmpxchg(v, old, new); } @@ -1642,6 +1752,7 @@ atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) static __always_inline long atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_cmpxchg_release(v, old, new); } @@ -1656,6 +1767,7 @@ atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) static __always_inline bool 
atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_long_try_cmpxchg(v, old, new); @@ -1672,6 +1784,7 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) static __always_inline bool atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_long_try_cmpxchg_release(v, old, new); @@ -1688,6 +1801,7 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) static __always_inline bool atomic_long_sub_and_test(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_sub_and_test(i, v); } @@ -1695,6 +1809,7 @@ atomic_long_sub_and_test(long i, atomic_long_t *v) static __always_inline bool atomic_long_dec_and_test(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_and_test(v); } @@ -1702,6 +1817,7 @@ atomic_long_dec_and_test(atomic_long_t *v) static __always_inline bool atomic_long_inc_and_test(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_and_test(v); } @@ -1709,6 +1825,7 @@ atomic_long_inc_and_test(atomic_long_t *v) static __always_inline bool atomic_long_add_negative(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_negative(i, v); } @@ -1716,6 +1833,7 @@ atomic_long_add_negative(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_add_unless(v, a, u); } @@ -1723,6 +1841,7 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool atomic_long_add_unless(atomic_long_t *v, long a, long u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_unless(v, a, u); } @@ -1730,6 +1849,7 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool atomic_long_inc_not_zero(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_not_zero(v); } @@ -1737,6 +1857,7 @@ atomic_long_inc_not_zero(atomic_long_t *v) static __always_inline bool atomic_long_inc_unless_negative(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_unless_negative(v); } @@ -1744,6 +1865,7 @@ atomic_long_inc_unless_negative(atomic_long_t *v) static __always_inline bool atomic_long_dec_unless_positive(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_unless_positive(v); } @@ -1751,6 +1873,7 @@ atomic_long_dec_unless_positive(atomic_long_t *v) static __always_inline long atomic_long_dec_if_positive(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_if_positive(v); } @@ -1758,6 +1881,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define xchg(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_xchg(__ai_ptr, __VA_ARGS__); \ }) @@ -1772,6 +1896,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define xchg_release(ptr, ...) 
\ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_xchg_release(__ai_ptr, __VA_ARGS__); \ }) @@ -1786,6 +1911,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) @@ -1800,6 +1926,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg_release(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ }) @@ -1814,6 +1941,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg64(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \ }) @@ -1828,6 +1956,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg64_release(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \ }) @@ -1843,6 +1972,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ typeof(oldp) __ai_oldp = (oldp); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \ @@ -1861,6 +1991,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ typeof(oldp) __ai_oldp = (oldp); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \ @@ -1892,6 +2023,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define sync_cmpxchg(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ }) @@ -1899,6 +2031,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg_double(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \ arch_cmpxchg_double(__ai_ptr, __VA_ARGS__); \ }) @@ -1912,4 +2045,4 @@ atomic_long_dec_if_positive(atomic_long_t *v) }) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// 2a9553f0a9d5619f19151092df5cabbbf16ce835 +// 87c974b93032afd42143613434d1a7788fa598f9 diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh index 035ceb4ee85c..68f902731d01 100755 --- a/scripts/atomic/gen-atomic-instrumented.sh +++ b/scripts/atomic/gen-atomic-instrumented.sh @@ -34,6 +34,14 @@ gen_param_check() gen_params_checks() { local meta="$1"; shift + local order="$1"; shift + + if [ "${order}" = "_release" ]; then + printf "\tkcsan_release();\n" + elif [ -z "${order}" ] && ! 
meta_in "$meta" "slv"; then + # RMW with return value is fully ordered + printf "\tkcsan_mb();\n" + fi while [ "$#" -gt 0 ]; do gen_param_check "$meta" "$1" @@ -56,7 +64,7 @@ gen_proto_order_variant() local ret="$(gen_ret_type "${meta}" "${int}")" local params="$(gen_params "${int}" "${atomic}" "$@")" - local checks="$(gen_params_checks "${meta}" "$@")" + local checks="$(gen_params_checks "${meta}" "${order}" "$@")" local args="$(gen_args "$@")" local retstmt="$(gen_ret_stmt "${meta}")" @@ -75,29 +83,44 @@ EOF gen_xchg() { local xchg="$1"; shift + local order="$1"; shift local mult="$1"; shift + kcsan_barrier="" + if [ "${xchg%_local}" = "${xchg}" ]; then + case "$order" in + _release) kcsan_barrier="kcsan_release()" ;; + "") kcsan_barrier="kcsan_mb()" ;; + esac + fi + if [ "${xchg%${xchg#try_cmpxchg}}" = "try_cmpxchg" ] ; then cat < X-Patchwork-Id: 12646969 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0286AC4321E for ; Tue, 30 Nov 2021 11:46:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241313AbhK3Ltz (ORCPT ); Tue, 30 Nov 2021 06:49:55 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45206 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241213AbhK3LtQ (ORCPT ); Tue, 30 Nov 2021 06:49:16 -0500 Received: from mail-wm1-x349.google.com (mail-wm1-x349.google.com [IPv6:2a00:1450:4864:20::349]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0942AC061399 for ; Tue, 30 Nov 2021 03:45:46 -0800 (PST) Received: by mail-wm1-x349.google.com with SMTP id p12-20020a05600c1d8c00b0033a22e48203so12699143wms.6 for ; Tue, 30 Nov 2021 03:45:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=JvNCvfwB/M304r5/BDcWOzh3dLnMrHum3apHlfxEx/4=; b=HGTmGOZbV8BO8UVwawHP2qaUUIQR8l5KZLll/+rqQfeQwOOUV8mNOVjYuSRralZ5Aj Twljfp8LxW2WknHiRqI9QoWbiqLagWNhyAuGUORzqavLTJN5zuDJkA6xswrqgI98Q+uE sGDIjOsQo9WMuJw/ByUioVp1OJy74McWnV33/n/ke3sFX2NvSAiuq9skxWedjHC8nrai PFaVXbEHKi7lgTq4v/bsg0Ngh7AzO5iLGy1cgulyhEEKXvYvFqiiN0vi/YfKe4jiq5uk +dCCCqPcYM82viw2lZRJNNCNvZn4sNAzh4K9IyWAuaDW1a3v+W75mITtZVW93eEWrquG Qiug== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=JvNCvfwB/M304r5/BDcWOzh3dLnMrHum3apHlfxEx/4=; b=iTtUaTHl3FO8bsup9Od0PWGGU5TrgJOWn4r6WRH/zyK5vNE9HfkKqI5zg5ZOr6M7gu obStd1exSKVu+aWSIWMeGsvmp39qtfg9BGMWyoEvzErvEk5eEyzioWXr71jZqhecy3qT BDiQmUfUZ+Eh3utR/CkLBvYrwOALl/s6EBNLp8eQIKlG2l4ohe/Rvh2LR46cY7uDbBFZ vrQcp4ObRCRkgOO1GpguFtNrS6BSBSBZBW3a4OOQYbC2motIEGHQmUnAXyyp79W/W15g bW6+l5i2dF5hdPjYqJIlCzm5ydzA6Ybsw3KwDwAtdo1QcQFiBbUQTPkRJ/nHlixG+9J+ 4m3Q== X-Gm-Message-State: AOAM533xaZrjFQa5ZCJmmq4+uBHHUwnQPmiunhHfjLjbfksbVuJHg7+m ngWU9Kc0HBUowdAAlfN196cR0u8npg== X-Google-Smtp-Source: ABdhPJzSDzQegokGIwsGrKkVQ/IhdBWsBMMU4nwhK3bFeKxSJ75NPWkWR6nK52NPFK40FBPIH38r66gw+w== X-Received: from elver.muc.corp.google.com ([2a00:79e0:15:13:86b7:11e9:7797:99f0]) (user=elver job=sendgmr) by 2002:a05:600c:2e46:: with SMTP id q6mr4344091wmf.6.1638272744574; Tue, 30 Nov 2021 03:45:44 -0800 (PST) Date: Tue, 30 Nov 2021 12:44:25 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: 
Subject: [PATCH v3 17/25] asm-generic/bitops, kcsan: Add instrumentation for barriers
From: Marco Elver

Adds the required KCSAN instrumentation for barriers of atomic bitops.

Signed-off-by: Marco Elver
---
 include/asm-generic/bitops/instrumented-atomic.h | 3 +++
 include/asm-generic/bitops/instrumented-lock.h   | 3 +++
 2 files changed, 6 insertions(+)

diff --git a/include/asm-generic/bitops/instrumented-atomic.h b/include/asm-generic/bitops/instrumented-atomic.h
index 81915dcd4b4e..c90192b1c755 100644
--- a/include/asm-generic/bitops/instrumented-atomic.h
+++ b/include/asm-generic/bitops/instrumented-atomic.h
@@ -67,6 +67,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
 {
+	kcsan_mb();
 	instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_set_bit(nr, addr);
 }
@@ -80,6 +81,7 @@ static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
+	kcsan_mb();
 	instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_clear_bit(nr, addr);
 }
@@ -93,6 +95,7 @@ static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
 {
+	kcsan_mb();
 	instrument_atomic_read_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_change_bit(nr, addr);
 }
diff --git a/include/asm-generic/bitops/instrumented-lock.h b/include/asm-generic/bitops/instrumented-lock.h
index 75ef606f7145..eb64bd4f11f3 100644
--- a/include/asm-generic/bitops/instrumented-lock.h
+++ b/include/asm-generic/bitops/instrumented-lock.h
@@ -22,6 +22,7 @@
  */
 static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
+	kcsan_release();
 	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_clear_bit_unlock(nr, addr);
 }
@@ -37,6 +38,7 @@ static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
  */
 static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
+	kcsan_release();
 	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___clear_bit_unlock(nr, addr);
 }
@@ -71,6 +73,7 @@ static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 static inline bool clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
 {
+	kcsan_release();
 	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_clear_bit_unlock_is_negative_byte(nr, addr);
 }
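
For illustration of what the release annotation models, consider a bit used
as a simple lock. This is a sketch only; guard, payload, and publish() are
hypothetical names, not code from this patch:

	static unsigned long guard;	/* bit 0 acts as a lock */
	static int payload;

	void publish(void)
	{
		while (test_and_set_bit_lock(0, &guard))	/* acquire */
			cpu_relax();
		payload = 42;		/* write made under the "lock" */
		/*
		 * Release: the kcsan_release() added above lets KCSAN know
		 * the write to payload is ordered before this bit clear.
		 */
		clear_bit_unlock(0, &guard);
	}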
From patchwork Tue Nov 30 11:44:26 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12646973
Date: Tue, 30 Nov 2021 12:44:26 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-19-elver@google.com>
Subject: [PATCH v3 18/25] x86/barriers, kcsan: Use generic instrumentation for non-smp barriers
From: Marco Elver

Prefix all barriers with __, now that asm-generic/barrier.h supports
defining the final instrumented version of these barriers. The change is
limited to barriers used by x86-64.
Signed-off-by: Marco Elver
---
 arch/x86/include/asm/barrier.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 3ba772a69cc8..35389b2af88e 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -19,9 +19,9 @@
 #define wmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)", "sfence", \
 				       X86_FEATURE_XMM2) ::: "memory", "cc")
 #else
-#define mb() asm volatile("mfence":::"memory")
-#define rmb() asm volatile("lfence":::"memory")
-#define wmb() asm volatile("sfence" ::: "memory")
+#define __mb() asm volatile("mfence":::"memory")
+#define __rmb() asm volatile("lfence":::"memory")
+#define __wmb() asm volatile("sfence" ::: "memory")
 #endif
 
 /**
@@ -51,8 +51,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 
 /* Prevent speculative execution past this barrier. */
 #define barrier_nospec() alternative("", "lfence", X86_FEATURE_LFENCE_RDTSC)
 
-#define dma_rmb() barrier()
-#define dma_wmb() barrier()
+#define __dma_rmb() barrier()
+#define __dma_wmb() barrier()
 
 #define __smp_mb() asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
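
For context, the __-prefixed definitions exist so that asm-generic/barrier.h
can define the final, instrumented barriers on top of them. A sketch of the
wrapping pattern, assuming the kcsan_*() annotations from
<linux/kcsan-checks.h> (the authoritative definitions live in the
asm-generic patch earlier in this series):

	/* Sketch: instrumented barriers wrap the arch-provided __-variants. */
	#ifdef __mb
	#define mb()	do { kcsan_mb(); __mb(); } while (0)
	#endif
	#ifdef __rmb
	#define rmb()	do { kcsan_rmb(); __rmb(); } while (0)
	#endif
	#ifdef __wmb
	#define wmb()	do { kcsan_wmb(); __wmb(); } while (0)
	#endif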
From patchwork Tue Nov 30 11:44:27 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12646975
Date: Tue, 30 Nov 2021 12:44:27 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-20-elver@google.com>
Subject: [PATCH v3 19/25] x86/qspinlock, kcsan: Instrument barrier of pv_queued_spin_unlock()
From: Marco Elver

If CONFIG_PARAVIRT_SPINLOCKS=y, queued_spin_unlock() is implemented using
pv_queued_spin_unlock(), which is entirely inline asm based. As such, we
do not receive any KCSAN barrier instrumentation via regular atomic
operations. Add the missing KCSAN barrier instrumentation for the
CONFIG_PARAVIRT_SPINLOCKS case.

Signed-off-by: Marco Elver
---
 arch/x86/include/asm/qspinlock.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index d86ab942219c..d87451df480b 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -53,6 +53,7 @@ static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 
 static inline void queued_spin_unlock(struct qspinlock *lock)
 {
+	kcsan_release();
 	pv_queued_spin_unlock(lock);
 }
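
The same consideration applies to any release that is implemented as raw
inline asm KCSAN cannot see through. A hypothetical sketch of the pattern
(my_spinlock and my_spin_unlock are illustrative names only):

	struct my_spinlock { u8 locked; };

	static inline void my_spin_unlock(struct my_spinlock *l)
	{
		/* Model the release ordering that the asm store provides. */
		kcsan_release();
		asm volatile("movb $0, %0" : "=m" (l->locked) : : "memory");
	}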
From patchwork Tue Nov 30 11:44:28 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12646983
Date: Tue, 30 Nov 2021 12:44:28 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-21-elver@google.com>
Subject: [PATCH v3 20/25] mm, kcsan: Enable barrier instrumentation
From: Marco Elver

Some memory management calls imply memory barriers that are required to
avoid false positives. For example, without the correct instrumentation,
we could observe data races of the following variant:

	T0                             | T1
	-------------------------------+-------------------------------
	                               | *a = 42;    ---+
	                               | kfree(a);      |
	                               |                |
	b = kmalloc(..); // b == a  <--+                |
	*b = 42; // not a data race!   |                |
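
The diagram translates roughly into the following sketch (a and b as in the
diagram; the ordering comes from the allocator's internal synchronization,
which is why the barriers implied by kfree()/kmalloc() must be modeled):

	int *a, *b;	/* sketch only */

	void t1(void)
	{
		*a = 42;	/* last access before the object is freed */
		kfree(a);	/* allocator implies release ordering */
	}

	void t0(void)
	{
		/* May return the very block that a pointed to (b == a). */
		b = kmalloc(sizeof(*b), GFP_KERNEL);
		*b = 42;	/* ordered after t1's accesses: no data race */
	}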
Therefore, instrument memory barriers in all allocator code currently not
being instrumented in a default build.

Signed-off-by: Marco Elver
---
 mm/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/Makefile b/mm/Makefile
index d6c0042e3aa0..7919cd7f13f2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -15,6 +15,8 @@ KCSAN_SANITIZE_slab_common.o := n
 KCSAN_SANITIZE_slab.o := n
 KCSAN_SANITIZE_slub.o := n
 KCSAN_SANITIZE_page_alloc.o := n
+# But enable explicit instrumentation for memory barriers.
+KCSAN_INSTRUMENT_BARRIERS := y
 
 # These files are disabled because they produce non-interesting and/or
 # flaky coverage that is not a function of syscall inputs. E.g. slab is out of

From patchwork Tue Nov 30 11:44:29 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12646985
Date: Tue, 30 Nov 2021 12:44:29 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-22-elver@google.com>
Subject: [PATCH v3 21/25] sched, kcsan: Enable memory barrier instrumentation
From: Marco Elver
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org There's no fundamental reason to disable KCSAN for scheduler code, except for excessive noise and performance concerns (instrumenting scheduler code is usually a good way to stress test KCSAN itself). However, several core sched functions imply memory barriers that are invisible to KCSAN without instrumentation, but are required to avoid false positives. Therefore, unconditionally enable instrumentation of memory barriers in scheduler code. Also update the comment to reflect this and be a bit more brief. Signed-off-by: Marco Elver --- kernel/sched/Makefile | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile index c7421f2d05e1..c83b37af155b 100644 --- a/kernel/sched/Makefile +++ b/kernel/sched/Makefile @@ -11,11 +11,10 @@ ccflags-y += $(call cc-disable-warning, unused-but-set-variable) # that is not a function of syscall inputs. E.g. involuntary context switches. KCOV_INSTRUMENT := n -# There are numerous data races here, however, most of them are due to plain accesses. -# This would make it even harder for syzbot to find reproducers, because these -# bugs trigger without specific input. Disable by default, but should re-enable -# eventually. +# Disable KCSAN to avoid excessive noise and performance degradation. To avoid +# false positives ensure barriers implied by sched functions are instrumented. 
 KCSAN_SANITIZE := n
+KCSAN_INSTRUMENT_BARRIERS := y
 
 ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
 # According to Alan Modra, the -fno-omit-frame-pointer is
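
As an illustration of the barriers implied by scheduler functions, consider
the canonical sleep/wake pattern below. This is a sketch using standard
task-state APIs, not code from this patch:

	static int done;	/* hypothetical flag handed between tasks */

	static void waiter(void)
	{
		for (;;) {
			/* set_current_state() implies a full barrier. */
			set_current_state(TASK_UNINTERRUPTIBLE);
			if (READ_ONCE(done))
				break;
			schedule();
		}
		__set_current_state(TASK_RUNNING);
	}

	static void finisher(struct task_struct *waiter_task)
	{
		WRITE_ONCE(done, 1);
		/* wake_up_process() implies a barrier when it wakes the task. */
		wake_up_process(waiter_task);
	}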
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-kbuild@vger.kernel.org Adds KCSAN's memory barrier instrumentation to objtool's uaccess whitelist. Signed-off-by: Marco Elver --- tools/objtool/check.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 21735829b860..61dfb66b30b6 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -849,6 +849,10 @@ static const char *uaccess_safe_builtin[] = { "__asan_report_store16_noabort", /* KCSAN */ "__kcsan_check_access", + "__kcsan_mb", + "__kcsan_wmb", + "__kcsan_rmb", + "__kcsan_release", "kcsan_found_watchpoint", "kcsan_setup_watchpoint", "kcsan_check_scoped_accesses", From patchwork Tue Nov 30 11:44:31 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12646979 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 9CF60C43217 for ; Tue, 30 Nov 2021 11:46:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S241126AbhK3LuQ (ORCPT ); Tue, 30 Nov 2021 06:50:16 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45064 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S241176AbhK3Ltm (ORCPT ); Tue, 30 Nov 2021 06:49:42 -0500 Received: from mail-wr1-x449.google.com (mail-wr1-x449.google.com [IPv6:2a00:1450:4864:20::449]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0B15DC061373 for ; Tue, 30 Nov 2021 03:46:01 -0800 (PST) Received: by mail-wr1-x449.google.com with SMTP id u4-20020a5d4684000000b0017c8c1de97dso3521818wrq.16 for ; Tue, 30 Nov 2021 03:46:00 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=1F+MofKXejp7ry8lfe8Rm0bUxzdx8s4N4hiT9E57RpU=; b=AxG+zGUAuQz1+vXIOQ6DG6QRjOq3T51cJb6xki7f4JHpONQpFZJx8GE/b7CRWzPCsG OLRaS1SxhNRItDJ+4oOUU5VmP/nYksafcZlT3TVFS1SWFBK3vno6Gg3Jm9z1cK6Zuest ui1H7XkT5tngcQHFEK6CC8n6Ghp1ypFOZCz94aNFgLZvN+GCnNenOMRE9+mty4qRXv1N 2qGht/8T/o21I4oyxH8o4aKxIfvvRVeX0nUK1yVoam9e7llKFicSRQUZiaWijAt560uq Dg7b5UU9ZIUcjYU29xL3ofX78AIwM6oIKYpH7LDX2euCle6y9eO4jBbsvtISE16oErOr I/ag== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:date:in-reply-to:message-id:mime-version :references:subject:from:to:cc; bh=1F+MofKXejp7ry8lfe8Rm0bUxzdx8s4N4hiT9E57RpU=; b=e3BiaIX5m2qItElmuyO4KLXpniTi/tHmijmK5S/NMFNAaQxdY5ijNMU0tervrQjoWU R0nXq6ST+M1QJOuKJzwfu44Q82a3zuMiCIPte+NbIhSElXZQSipQ/v+FZwWPm1mZNX6J b3S80sCsobMsmfirnG+b14cqelvOAuzYCQjxlUHE+YUSyBvJ9E9f7i6Yzo6tIjs/QCJd aEFnksZkl7FGxE7u2BVuToxKlH4BEwYndHsRx7U+n9FvlWetXdor2Ug1QW3XHIaJ/8/c dJ24Aqiu+lUFv7aXCoRLsamjIXFjxjcZ28AZ5K4XKqm3NUWsJ2dTrE/WQ+u3W7TeQiB7 IQhg== X-Gm-Message-State: AOAM531EWuwJhSSjp2BKA4+VJDrWkQgJuvACr9OwEmnOK3jQX9Qg5EvX jBrefm1pE+kzVnAemaau8R451CeQYA== X-Google-Smtp-Source: 
From patchwork Tue Nov 30 11:44:31 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12646979
Date: Tue, 30 Nov 2021 12:44:31 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-24-elver@google.com>
Subject: [PATCH v3 23/25] objtool, kcsan: Remove memory barrier instrumentation from noinstr
From: Marco Elver

Teach objtool to turn instrumentation required for memory barrier modeling
into nops in noinstr text. The __tsan_func_entry/exit calls are still
emitted by compilers even with the __no_sanitize_thread attribute. The
memory barrier instrumentation will be inserted explicitly (without
compiler help), and thus also needs to be removed explicitly.

Signed-off-by: Marco Elver
Acked-by: Josh Poimboeuf
---
v3:
* s/removable_instr/profiling_func/ (suggested by Josh Poimboeuf)
* s/__kcsan_(mb|wmb|rmb|release)/__atomic_signal_fence/, because
  Clang < 14.0 will still emit these in noinstr even with __no_kcsan.
* Fix and add more comments.

v2:
* Rewrite after rebase to v5.16-rc1.
---
 tools/objtool/check.c               | 37 ++++++++++++++++++-----
 tools/objtool/include/objtool/elf.h |  2 +-
 2 files changed, 32 insertions(+), 7 deletions(-)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 61dfb66b30b6..a9a1f7259d62 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -1072,11 +1072,11 @@ static void annotate_call_site(struct objtool_file *file,
 	}
 
 	/*
-	 * Many compilers cannot disable KCOV with a function attribute
-	 * so they need a little help, NOP out any KCOV calls from noinstr
-	 * text.
+	 * Many compilers cannot disable KCOV or sanitizer calls with a function
+	 * attribute so they need a little help, NOP out any such calls from
+	 * noinstr text.
 	 */
-	if (insn->sec->noinstr && sym->kcov) {
+	if (insn->sec->noinstr && sym->profiling_func) {
 		if (reloc) {
 			reloc->type = R_NONE;
 			elf_write_reloc(file->elf, reloc);
@@ -1991,6 +1991,31 @@ static int read_intra_function_calls(struct objtool_file *file)
 	return 0;
 }
 
+/*
+ * Return true if name matches an instrumentation function, where calls to that
+ * function from noinstr code can safely be removed, but compilers won't do so.
+ */
+static bool is_profiling_func(const char *name)
+{
+	/*
+	 * Many compilers cannot disable KCOV with a function attribute.
+	 */
+	if (!strncmp(name, "__sanitizer_cov_", 16))
+		return true;
+
+	/*
+	 * Some compilers currently do not remove __tsan_func_entry/exit nor
+	 * __tsan_atomic_signal_fence (used for barrier instrumentation) with
+	 * the __no_sanitize_thread attribute, remove them. Once the kernel's
+	 * minimum Clang version is 14.0, this can be removed.
+	 */
+	if (!strncmp(name, "__tsan_func_", 12) ||
+	    !strcmp(name, "__tsan_atomic_signal_fence"))
+		return true;
+
+	return false;
+}
+
 static int classify_symbols(struct objtool_file *file)
 {
 	struct section *sec;
@@ -2011,8 +2036,8 @@ static int classify_symbols(struct objtool_file *file)
 		if (!strcmp(func->name, "__fentry__"))
 			func->fentry = true;
 
-		if (!strncmp(func->name, "__sanitizer_cov_", 16))
-			func->kcov = true;
+		if (is_profiling_func(func->name))
+			func->profiling_func = true;
 	}
 }
 
diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index cdc739fa9a6f..d22336781401 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -58,7 +58,7 @@ struct symbol {
 	u8 static_call_tramp : 1;
 	u8 retpoline_thunk : 1;
 	u8 fentry : 1;
-	u8 kcov : 1;
+	u8 profiling_func : 1;
 	struct list_head pv_target;
 };
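
To make the effect concrete, a hedged sketch of the kind of function
affected (the function itself is hypothetical):

	/*
	 * Even though this function must not be instrumented, Clang < 14.0
	 * still emits __tsan_func_entry()/__tsan_func_exit() calls around
	 * its body, and __tsan_atomic_signal_fence() for explicit barrier
	 * instrumentation. Objtool now NOPs such calls in .noinstr.text.
	 */
	noinstr void hypothetical_entry_path(void)
	{
		/* Body compiled without KCSAN access instrumentation. */
	}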
From patchwork Tue Nov 30 11:44:32 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12646987
Date: Tue, 30 Nov 2021 12:44:32 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-25-elver@google.com>
Subject: [PATCH v3 24/25] compiler_attributes.h: Add __disable_sanitizer_instrumentation
From: Marco Elver

From: Alexander Potapenko

The new attribute maps to
__attribute__((disable_sanitizer_instrumentation)), which will be
supported by Clang >= 14.0. Future support in GCC is also possible. This
attribute disables compiler instrumentation for kernel sanitizer tools,
making it easier to implement noinstr. It is different from the existing
__no_sanitize* attributes, which may still allow certain types of
instrumentation to prevent false positives.

Signed-off-by: Alexander Potapenko
Signed-off-by: Marco Elver
---
v3:
* New patch.
---
 include/linux/compiler_attributes.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
index b9121afd8733..37e260020221 100644
--- a/include/linux/compiler_attributes.h
+++ b/include/linux/compiler_attributes.h
@@ -308,6 +308,24 @@
 # define __compiletime_warning(msg)
 #endif
 
+/*
+ * Optional: only supported since clang >= 14.0
+ *
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#disable-sanitizer-instrumentation
+ *
+ * disable_sanitizer_instrumentation is not always similar to
+ * no_sanitize((<list of sanitizers>)): the latter may still let specific
+ * sanitizers insert code into functions to prevent false positives. Unlike
+ * that, disable_sanitizer_instrumentation prevents all kinds of
+ * instrumentation to functions with the attribute.
+ */
+#if __has_attribute(disable_sanitizer_instrumentation)
+# define __disable_sanitizer_instrumentation \
+	 __attribute__((disable_sanitizer_instrumentation))
+#else
+# define __disable_sanitizer_instrumentation
+#endif
+
 /*
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-weak-function-attribute
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-weak-variable-attribute
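
A sketch of intended usage (the function name is hypothetical); the
fallback definition keeps the macro a no-op on compilers without the
attribute:

	/* No sanitizer pass may insert code into this function. */
	static __disable_sanitizer_instrumentation void never_instrumented(void)
	{
	}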
From patchwork Tue Nov 30 11:44:33 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12646981
Date: Tue, 30 Nov 2021 12:44:33 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-26-elver@google.com>
Subject: [PATCH v3 25/25] kcsan: Support WEAK_MEMORY with Clang where no objtool support exists
From: Marco Elver

Clang and GCC behave a little differently when it comes to the
__no_sanitize_thread attribute; there are valid reasons for either
behavior, and depending on context either one could be right.

Traditionally, user space ThreadSanitizer [1] still expects instrumented
builtin atomics (to avoid false positives) and __tsan_func_{entry,exit}
(to generate meaningful stack traces), even if the function has the
attribute no_sanitize("thread").

[1] https://clang.llvm.org/docs/ThreadSanitizer.html#attribute-no-sanitize-thread

GCC doesn't follow the same policy (for better or worse), and removes all
kinds of instrumentation if no_sanitize is added. Arguably, since this may
be a problem for user space ThreadSanitizer, we expect this may change in
future.

Since KCSAN != ThreadSanitizer, the likelihood of false positives, even
without barrier instrumentation everywhere, is much lower by design. At
least for Clang, however, to fully remove all sanitizer instrumentation,
we must add the disable_sanitizer_instrumentation attribute, which is
available since Clang 14.0.

Signed-off-by: Marco Elver
---
v3:
* New patch.
---
 include/linux/compiler_types.h | 13 ++++++++++++-
 lib/Kconfig.kcsan              |  2 +-
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 1d32f4c03c9e..3c1795fdb568 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -198,9 +198,20 @@ struct ftrace_likely_data {
 # define __no_kasan_or_inline __always_inline
 #endif
 
-#define __no_kcsan __no_sanitize_thread
 #ifdef __SANITIZE_THREAD__
+/*
+ * Clang still emits instrumentation for __tsan_func_{entry,exit}() and builtin
+ * atomics even with __no_sanitize_thread (to avoid false positives in userspace
+ * ThreadSanitizer). The kernel's requirements are stricter and we really do not
+ * want any instrumentation with __no_kcsan.
+ *
+ * Therefore we add __disable_sanitizer_instrumentation where available to
+ * disable all instrumentation. See Kconfig.kcsan where this is mandatory.
+ */
+# define __no_kcsan __no_sanitize_thread __disable_sanitizer_instrumentation
 # define __no_sanitize_or_inline __no_kcsan notrace __maybe_unused
+#else
+# define __no_kcsan
 #endif
 
 #ifndef __no_sanitize_or_inline
diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index e4394ea8068b..63b70b8c5551 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -198,7 +198,7 @@ config KCSAN_WEAK_MEMORY
 	# We can either let objtool nop __tsan_func_{entry,exit}() and builtin
 	# atomics instrumentation in .noinstr.text, or use a compiler that can
 	# implement __no_kcsan to really remove all instrumentation.
-	depends on STACK_VALIDATION || CC_IS_GCC
+	depends on STACK_VALIDATION || CC_IS_GCC || CLANG_VERSION >= 140000
 	help
 	  Enable support for modeling a subset of weak memory, which allows
 	  detecting a subset of data races due to missing memory barriers.
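
A hedged sketch of the end result (function and variable names are
hypothetical): with a supported compiler, __no_kcsan now removes all
KCSAN-emitted calls, without objtool having to patch them out:

	static __no_kcsan void read_untracked(const unsigned long *src,
					      unsigned long *dst)
	{
		*dst = *src;	/* plain access, no KCSAN runtime calls */
	}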