From patchwork Tue Nov 30 11:44:09 2021
Date: Tue, 30 Nov 2021 12:44:09 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-2-elver@google.com>
Subject: [PATCH v3 01/25] kcsan: Refactor reading of instrumented memory
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov,
    Ingo Molnar, Mark Rutland, Peter Zijlstra, Thomas Gleixner,
    Waiman Long, Will Deacon, kasan-dev@googlegroups.com,
    linux-arch@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org

Factor out the switch statement reading instrumented memory into a
helper read_instrumented_memory(). No functional change.

Signed-off-by: Marco Elver
Acked-by: Mark Rutland
---
 kernel/kcsan/core.c | 51 +++++++++++++++------------------------------
 1 file changed, 17 insertions(+), 34 deletions(-)

diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 4b84c8e7884b..6bfd3040f46b 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -325,6 +325,21 @@ static void delay_access(int type)
 	udelay(delay);
 }
 
+/*
+ * Reads the instrumented memory for value change detection; value change
+ * detection is currently done for accesses up to a size of 8 bytes.
+ */
+static __always_inline u64 read_instrumented_memory(const volatile void *ptr, size_t size)
+{
+	switch (size) {
+	case 1: return READ_ONCE(*(const u8 *)ptr);
+	case 2: return READ_ONCE(*(const u16 *)ptr);
+	case 4: return READ_ONCE(*(const u32 *)ptr);
+	case 8: return READ_ONCE(*(const u64 *)ptr);
+	default: return 0; /* Ignore; we do not diff the values. */
+	}
+}
+
 void kcsan_save_irqtrace(struct task_struct *task)
 {
 #ifdef CONFIG_TRACE_IRQFLAGS
@@ -482,23 +497,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * Read the current value, to later check and infer a race if the data
 	 * was modified via a non-instrumented access, e.g. from a device.
 	 */
-	old = 0;
-	switch (size) {
-	case 1:
-		old = READ_ONCE(*(const u8 *)ptr);
-		break;
-	case 2:
-		old = READ_ONCE(*(const u16 *)ptr);
-		break;
-	case 4:
-		old = READ_ONCE(*(const u32 *)ptr);
-		break;
-	case 8:
-		old = READ_ONCE(*(const u64 *)ptr);
-		break;
-	default:
-		break; /* ignore; we do not diff the values */
-	}
+	old = read_instrumented_memory(ptr, size);
 
 	/*
 	 * Delay this thread, to increase probability of observing a racy
@@ -511,23 +510,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * racy access.
 	 */
 	access_mask = ctx->access_mask;
-	new = 0;
-	switch (size) {
-	case 1:
-		new = READ_ONCE(*(const u8 *)ptr);
-		break;
-	case 2:
-		new = READ_ONCE(*(const u16 *)ptr);
-		break;
-	case 4:
-		new = READ_ONCE(*(const u32 *)ptr);
-		break;
-	case 8:
-		new = READ_ONCE(*(const u64 *)ptr);
-		break;
-	default:
-		break; /* ignore; we do not diff the values */
-	}
+	new = read_instrumented_memory(ptr, size);
 
 	diff = old ^ new;
 	if (access_mask)
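For reference, a condensed sketch (editorial, not part of the patch; the
names follow kernel/kcsan/core.c) of the value-change detection that now
calls the helper twice, with the watchpoint setup/removal elided:

	u64 old, new;

	old = read_instrumented_memory(ptr, size);
	delay_access(type);	/* stall; a racing, possibly uninstrumented, writer may modify *ptr */
	new = read_instrumented_memory(ptr, size);
	if (old ^ new)
		value_change = KCSAN_VALUE_CHANGE_TRUE;	/* infer a race */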
From patchwork Tue Nov 30 11:44:10 2021
Date: Tue, 30 Nov 2021 12:44:10 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-3-elver@google.com>
Subject: [PATCH v3 02/25] kcsan: Remove redundant zero-initialization of globals
From: Marco Elver

They are implicitly zero-initialized; remove the explicit initialization.
This keeps the upcoming additions to kcsan_ctx consistent with the rest.

No functional change intended.

Signed-off-by: Marco Elver
Acked-by: Mark Rutland
---
v3:
* Minimize diff by leaving "scoped_accesses" on its own line, which
  should also reduce diff of future changes.
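As a minimal illustration of why the zeroes are redundant (editorial
sketch; example_ctx is a made-up name): objects with static storage
duration are zero-initialized by the C standard, so only members with
non-zero initial values need designated initializers.

	/* Sketch: static objects start out all-zero. */
	static struct kcsan_ctx example_ctx = {
		/* .disable_count, .atomic_next, .atomic_nest_count,
		 * .in_flat_atomic and .access_mask are implicitly 0/false. */
		.scoped_accesses = {LIST_POISON1, NULL}, /* non-zero: stays explicit */
	};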
---
 init/init_task.c    | 5 -----
 kernel/kcsan/core.c | 5 -----
 2 files changed, 10 deletions(-)

diff --git a/init/init_task.c b/init/init_task.c
index 2d024066e27b..73cc8f03511a 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -182,11 +182,6 @@ struct task_struct init_task
 #endif
 #ifdef CONFIG_KCSAN
 	.kcsan_ctx = {
-		.disable_count = 0,
-		.atomic_next = 0,
-		.atomic_nest_count = 0,
-		.in_flat_atomic = false,
-		.access_mask = 0,
 		.scoped_accesses = {LIST_POISON1, NULL},
 	},
 #endif
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 6bfd3040f46b..e34a1710b7bc 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -44,11 +44,6 @@ bool kcsan_enabled;
 
 /* Per-CPU kcsan_ctx for interrupts */
 static DEFINE_PER_CPU(struct kcsan_ctx, kcsan_cpu_ctx) = {
-	.disable_count = 0,
-	.atomic_next = 0,
-	.atomic_nest_count = 0,
-	.in_flat_atomic = false,
-	.access_mask = 0,
 	.scoped_accesses = {LIST_POISON1, NULL},
 };

From patchwork Tue Nov 30 11:44:11 2021
Date: Tue, 30 Nov 2021 12:44:11 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-4-elver@google.com>
Subject: [PATCH v3 03/25] kcsan: Avoid checking scoped accesses from nested contexts
From: Marco Elver

Avoid checking scoped accesses from nested contexts (such as nested
interrupts or scheduler code) which share the same kcsan_ctx. This avoids
detecting false positive races between accesses in the same thread and
currently scoped accesses: consider setting up a watchpoint for a
non-scoped (normal) access that also "conflicts" with a current scoped
access. In a nested interrupt (or in the scheduler), which shares the
same kcsan_ctx, we cannot check scoped accesses set up in the parent
context -- simply ignore them in this case.

With the introduction of kcsan_ctx::disable_scoped, we can also clean up
kcsan_check_scoped_accesses()'s recursion guard, and no longer need to
modify the list's prev pointer.
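For illustration, a sketch of the false positive being avoided (editorial;
the object is hypothetical, while ASSERT_EXCLUSIVE_WRITER_SCOPED is the
existing kcsan-checks.h macro that sets up a scoped access):

	struct obj { long state; } *obj;	/* hypothetical */

	/* Task context: sets up a scoped access covering the region below. */
	ASSERT_EXCLUSIVE_WRITER_SCOPED(obj->state);

	/*
	 * If an interrupt nests here, it shares this task's kcsan_ctx; an
	 * access to obj->state from the handler would be checked against the
	 * scoped access above and reported as a race, although both belong
	 * to the same thread of execution. With this patch, scoped accesses
	 * are simply not checked from such nested contexts.
	 */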
Signed-off-by: Marco Elver
---
 include/linux/kcsan.h |  1 +
 kernel/kcsan/core.c   | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/include/linux/kcsan.h b/include/linux/kcsan.h
index fc266ecb2a4d..13cef3458fed 100644
--- a/include/linux/kcsan.h
+++ b/include/linux/kcsan.h
@@ -21,6 +21,7 @@
  */
 struct kcsan_ctx {
 	int disable_count; /* disable counter */
+	int disable_scoped; /* disable scoped access counter */
 	int atomic_next; /* number of following atomic ops */
 
 	/*
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index e34a1710b7bc..bd359f8ee63a 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -204,15 +204,17 @@ check_access(const volatile void *ptr, size_t size, int type, unsigned long ip);
 static noinline void kcsan_check_scoped_accesses(void)
 {
 	struct kcsan_ctx *ctx = get_ctx();
-	struct list_head *prev_save = ctx->scoped_accesses.prev;
 	struct kcsan_scoped_access *scoped_access;
 
-	ctx->scoped_accesses.prev = NULL;  /* Avoid recursion. */
+	if (ctx->disable_scoped)
+		return;
+
+	ctx->disable_scoped++;
 	list_for_each_entry(scoped_access, &ctx->scoped_accesses, list) {
 		check_access(scoped_access->ptr, scoped_access->size,
 			     scoped_access->type, scoped_access->ip);
 	}
-	ctx->scoped_accesses.prev = prev_save;
+	ctx->disable_scoped--;
 }
 
 /* Rules for generic atomic accesses. Called from fast-path. */
@@ -465,6 +467,15 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 		goto out;
 	}
 
+	/*
+	 * Avoid races of scoped accesses from nested interrupts (or scheduler).
+	 * Assume setting up a watchpoint for a non-scoped (normal) access that
+	 * also conflicts with a current scoped access. In a nested interrupt,
+	 * which shares the context, it would check a conflicting scoped access.
+	 * To avoid, disable scoped access checking.
+	 */
+	ctx->disable_scoped++;
+
 	/*
 	 * Save and restore the IRQ state trace touched by KCSAN, since KCSAN's
 	 * runtime is entered for every memory access, and potentially useful
@@ -578,6 +589,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	if (!kcsan_interrupt_watcher)
 		local_irq_restore(irq_flags);
 	kcsan_restore_irqtrace(current);
+	ctx->disable_scoped--;
 out:
 	user_access_restore(ua_flags);
 }

From patchwork Tue Nov 30 11:44:12 2021
Date: Tue, 30 Nov 2021 12:44:12 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-5-elver@google.com>
Subject: [PATCH v3 04/25] kcsan: Add core support for a subset of weak memory modeling
From: Marco Elver

Add support for modeling a subset of weak memory, which will enable
detection of a subset of data races due to missing memory barriers.

KCSAN's approach to detecting missing memory barriers is based on
modeling access reordering, and is enabled if `CONFIG_KCSAN_WEAK_MEMORY=y`,
which depends on `CONFIG_KCSAN_STRICT=y`. The feature can be enabled or
disabled at boot and runtime via the `kcsan.weak_memory` boot parameter.

Each memory access for which a watchpoint is set up is also selected for
simulated reordering within the scope of its function (at most 1
in-flight access).

We are limited to modeling the effects of "buffering" (delaying the
access), since the runtime cannot "prefetch" accesses (therefore no
acquire modeling). Once an access has been selected for reordering, it
is checked along every other access until the end of the function scope.
If an appropriate memory barrier is encountered, the access will no
longer be considered for reordering.

When the result of a memory operation should be ordered by a barrier,
KCSAN can then detect data races where the conflict only occurs as a
result of a missing barrier due to reordered accesses.

Suggested-by: Dmitry Vyukov
Signed-off-by: Marco Elver
---
v3:
* Remove kcsan_noinstr hackery, since we now try to avoid adding any
  instrumentation to .noinstr.text in the first place.
* Restrict config WEAK_MEMORY to only be enabled with tooling where we
  actually remove instrumentation from noinstr.
* Don't define kcsan_weak_memory bool if !KCSAN_WEAK_MEMORY.

v2:
* Define kcsan_noinstr as noinline if we rely on objtool nop'ing out
  calls, to avoid things like LTO inlining it.
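As a hedged sketch of the bug class this targets (editorial; the
variables and use() are made up for illustration), consider message
passing with a missing write barrier:

	int data;
	int ready;

	void producer(void)
	{
		data = 42;
		/*
		 * BUG: missing smp_wmb(): the store to data may be reordered
		 * past the store to ready. With weak memory modeling, KCSAN
		 * can simulate exactly this delay and report the resulting
		 * data race with consumer().
		 */
		WRITE_ONCE(ready, 1);
	}

	void consumer(void)
	{
		if (READ_ONCE(ready)) {
			smp_rmb();
			use(data);	/* hypothetical; races with the delayed store */
		}
	}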
---
 include/linux/kcsan-checks.h |  10 +-
 include/linux/kcsan.h        |  10 +-
 include/linux/sched.h        |   3 +
 kernel/kcsan/core.c          | 209 ++++++++++++++++++++++++++++++++---
 lib/Kconfig.kcsan            |  20 ++++
 scripts/Makefile.kcsan       |   9 +-
 6 files changed, 242 insertions(+), 19 deletions(-)

diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h
index 5f5965246877..a1c6a89fde71 100644
--- a/include/linux/kcsan-checks.h
+++ b/include/linux/kcsan-checks.h
@@ -99,7 +99,15 @@ void kcsan_set_access_mask(unsigned long mask);
 
 /* Scoped access information. */
 struct kcsan_scoped_access {
-	struct list_head list;
+	union {
+		struct list_head list; /* scoped_accesses list */
+		/*
+		 * Not an entry in scoped_accesses list; stack depth from where
+		 * the access was initialized.
+		 */
+		int stack_depth;
+	};
+
 	/* Access information. */
 	const volatile void *ptr;
 	size_t size;
diff --git a/include/linux/kcsan.h b/include/linux/kcsan.h
index 13cef3458fed..c07c71f5ba4f 100644
--- a/include/linux/kcsan.h
+++ b/include/linux/kcsan.h
@@ -49,8 +49,16 @@ struct kcsan_ctx {
 	 */
 	unsigned long access_mask;
 
-	/* List of scoped accesses. */
+	/* List of scoped accesses; likely to be empty. */
 	struct list_head scoped_accesses;
+
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	/*
+	 * Scoped access for modeling access reordering to detect missing memory
+	 * barriers; only keep 1 to keep fast-path complexity manageable.
+	 */
+	struct kcsan_scoped_access reorder_access;
+#endif
 };
 
 /**
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 78c351e35fec..0cd40b010487 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1339,6 +1339,9 @@ struct task_struct {
 #ifdef CONFIG_TRACE_IRQFLAGS
 	struct irqtrace_events kcsan_save_irqtrace;
 #endif
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	int kcsan_stack_depth;
+#endif
 #endif
 
 #if IS_ENABLED(CONFIG_KUNIT)
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index bd359f8ee63a..36a75e79a0bd 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -20,6 +21,8 @@
 #include
 #include
+#include
+
 #include "encoding.h"
 #include "kcsan.h"
 #include "permissive.h"
@@ -40,6 +43,13 @@ module_param_named(udelay_interrupt, kcsan_udelay_interrupt, uint, 0644);
 module_param_named(skip_watch, kcsan_skip_watch, long, 0644);
 module_param_named(interrupt_watcher, kcsan_interrupt_watcher, bool, 0444);
 
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+static bool kcsan_weak_memory = true;
+module_param_named(weak_memory, kcsan_weak_memory, bool, 0644);
+#else
+#define kcsan_weak_memory false
+#endif
+
 bool kcsan_enabled;
 
 /* Per-CPU kcsan_ctx for interrupts */
@@ -351,6 +361,67 @@ void kcsan_restore_irqtrace(struct task_struct *task)
 #endif
 }
 
+static __always_inline int get_kcsan_stack_depth(void)
+{
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	return current->kcsan_stack_depth;
+#else
+	BUILD_BUG();
+	return 0;
+#endif
+}
+
+static __always_inline void add_kcsan_stack_depth(int val)
+{
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	current->kcsan_stack_depth += val;
+#else
+	BUILD_BUG();
+#endif
+}
+
+static __always_inline struct kcsan_scoped_access *get_reorder_access(struct kcsan_ctx *ctx)
+{
+#ifdef CONFIG_KCSAN_WEAK_MEMORY
+	return ctx->disable_scoped ? NULL : &ctx->reorder_access;
+#else
+	return NULL;
+#endif
+}
+
+static __always_inline bool
+find_reorder_access(struct kcsan_ctx *ctx, const volatile void *ptr, size_t size,
+		    int type, unsigned long ip)
+{
+	struct kcsan_scoped_access *reorder_access = get_reorder_access(ctx);
+
+	if (!reorder_access)
+		return false;
+
+	/*
+	 * Note: If accesses are repeated while reorder_access is identical,
+	 * never matches the new access, because !(type & KCSAN_ACCESS_SCOPED).
+	 */
+	return reorder_access->ptr == ptr && reorder_access->size == size &&
+	       reorder_access->type == type && reorder_access->ip == ip;
+}
+
+static inline void
+set_reorder_access(struct kcsan_ctx *ctx, const volatile void *ptr, size_t size,
+		   int type, unsigned long ip)
+{
+	struct kcsan_scoped_access *reorder_access = get_reorder_access(ctx);
+
+	if (!reorder_access || !kcsan_weak_memory)
+		return;
+
+	reorder_access->ptr = ptr;
+	reorder_access->size = size;
+	reorder_access->type = type | KCSAN_ACCESS_SCOPED;
+	reorder_access->ip = ip;
+	reorder_access->stack_depth = get_kcsan_stack_depth();
+}
+
 /*
  * Pull everything together: check_access() below contains the performance
  * critical operations; the fast-path (including check_access) functions should
@@ -389,8 +460,10 @@ static noinline void kcsan_found_watchpoint(const volatile void *ptr,
 	 * The access_mask check relies on value-change comparison. To avoid
 	 * reporting a race where e.g. the writer set up the watchpoint, but the
 	 * reader has access_mask!=0, we have to ignore the found watchpoint.
+	 *
+	 * reorder_access is never created from an access with access_mask set.
 	 */
-	if (ctx->access_mask)
+	if (ctx->access_mask && !find_reorder_access(ctx, ptr, size, type, ip))
 		return;
 
 	/*
@@ -440,11 +513,13 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	const bool is_assert = (type & KCSAN_ACCESS_ASSERT) != 0;
 	atomic_long_t *watchpoint;
 	u64 old, new, diff;
-	unsigned long access_mask;
 	enum kcsan_value_change value_change = KCSAN_VALUE_CHANGE_MAYBE;
+	bool interrupt_watcher = kcsan_interrupt_watcher;
 	unsigned long ua_flags = user_access_save();
 	struct kcsan_ctx *ctx = get_ctx();
+	unsigned long access_mask = ctx->access_mask;
 	unsigned long irq_flags = 0;
+	bool is_reorder_access;
 
 	/*
 	 * Always reset kcsan_skip counter in slow-path to avoid underflow; see
@@ -467,6 +542,17 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 		goto out;
 	}
 
+	/*
+	 * The local CPU cannot observe reordering of its own accesses, and
+	 * therefore we need to take care of 2 cases to avoid false positives:
+	 *
+	 * 1. Races of the reordered access with interrupts. To avoid, if
+	 *    the current access is reorder_access, disable interrupts.
+	 * 2. Avoid races of scoped accesses from nested interrupts (below).
+	 */
+	is_reorder_access = find_reorder_access(ctx, ptr, size, type, ip);
+	if (is_reorder_access)
+		interrupt_watcher = false;
 	/*
 	 * Avoid races of scoped accesses from nested interrupts (or scheduler).
 	 * Assume setting up a watchpoint for a non-scoped (normal) access that
 	 * also conflicts with a current scoped access. In a nested interrupt,
 	 * which shares the context, it would check a conflicting scoped access.
@@ -482,7 +568,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * information is lost if dirtied by KCSAN.
 	 */
 	kcsan_save_irqtrace(current);
-	if (!kcsan_interrupt_watcher)
+	if (!interrupt_watcher)
 		local_irq_save(irq_flags);
 
 	watchpoint = insert_watchpoint((unsigned long)ptr, size, is_write);
@@ -503,7 +589,7 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * Read the current value, to later check and infer a race if the data
 	 * was modified via a non-instrumented access, e.g. from a device.
 	 */
-	old = read_instrumented_memory(ptr, size);
+	old = is_reorder_access ? 0 : read_instrumented_memory(ptr, size);
 
 	/*
 	 * Delay this thread, to increase probability of observing a racy
@@ -515,8 +601,17 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 * Re-read value, and check if it is as expected; if not, we infer a
 	 * racy access.
 	 */
-	access_mask = ctx->access_mask;
-	new = read_instrumented_memory(ptr, size);
+	if (!is_reorder_access) {
+		new = read_instrumented_memory(ptr, size);
+	} else {
+		/*
+		 * Reordered accesses cannot be used for value change detection,
+		 * because the memory location may no longer be accessible and
+		 * could result in a fault.
+		 */
+		new = 0;
+		access_mask = 0;
+	}
 
 	diff = old ^ new;
 	if (access_mask)
@@ -585,11 +680,20 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 	 */
 	remove_watchpoint(watchpoint);
 	atomic_long_dec(&kcsan_counters[KCSAN_COUNTER_USED_WATCHPOINTS]);
+
 out_unlock:
-	if (!kcsan_interrupt_watcher)
+	if (!interrupt_watcher)
 		local_irq_restore(irq_flags);
 	kcsan_restore_irqtrace(current);
 	ctx->disable_scoped--;
+
+	/*
+	 * Reordered accesses cannot be used for value change detection,
+	 * therefore never consider for reordering if access_mask is set.
+	 * ASSERT_EXCLUSIVE are not real accesses, ignore them as well.
+	 */
+	if (!access_mask && !is_assert)
+		set_reorder_access(ctx, ptr, size, type, ip);
 out:
 	user_access_restore(ua_flags);
 }
@@ -597,7 +701,6 @@ kcsan_setup_watchpoint(const volatile void *ptr, size_t size, int type, unsigned
 static __always_inline void
 check_access(const volatile void *ptr, size_t size, int type, unsigned long ip)
 {
-	const bool is_write = (type & KCSAN_ACCESS_WRITE) != 0;
 	atomic_long_t *watchpoint;
 	long encoded_watchpoint;
 
@@ -608,12 +711,14 @@ check_access(const volatile void *ptr, size_t size, int type, unsigned long ip)
 	if (unlikely(size == 0))
 		return;
 
+again:
 	/*
 	 * Avoid user_access_save in fast-path: find_watchpoint is safe without
 	 * user_access_save, as the address that ptr points to is only used to
 	 * check if a watchpoint exists; ptr is never dereferenced.
 	 */
-	watchpoint = find_watchpoint((unsigned long)ptr, size, !is_write,
+	watchpoint = find_watchpoint((unsigned long)ptr, size,
+				     !(type & KCSAN_ACCESS_WRITE),
 				     &encoded_watchpoint);
 	/*
 	 * It is safe to check kcsan_is_enabled() after find_watchpoint in the
@@ -627,9 +732,42 @@ check_access(const volatile void *ptr, size_t size, int type, unsigned long ip)
 	else {
 		struct kcsan_ctx *ctx = get_ctx(); /* Call only once in fast-path. */
 
-		if (unlikely(should_watch(ctx, ptr, size, type)))
+		if (unlikely(should_watch(ctx, ptr, size, type))) {
 			kcsan_setup_watchpoint(ptr, size, type, ip);
-		else if (unlikely(ctx->scoped_accesses.prev))
+			return;
+		}
+
+		if (!(type & KCSAN_ACCESS_SCOPED)) {
+			struct kcsan_scoped_access *reorder_access = get_reorder_access(ctx);
+
+			if (reorder_access) {
+				/*
+				 * reorder_access check: simulates reordering of
+				 * the access after subsequent operations.
+				 */
+				ptr = reorder_access->ptr;
+				type = reorder_access->type;
+				ip = reorder_access->ip;
+				/*
+				 * Upon a nested interrupt, this context's
+				 * reorder_access can be modified (shared ctx).
+				 * We know that upon return, reorder_access is
+				 * always invalidated by setting size to 0 via
+				 * __tsan_func_exit(). Therefore we must read
+				 * and check size after the other fields.
+				 */
+				barrier();
+				size = READ_ONCE(reorder_access->size);
+				if (size)
+					goto again;
+			}
+		}
+
+		/*
+		 * Always checked last, right before returning from runtime;
+		 * if reorder_access is valid, checked after it was checked.
+		 */
+		if (unlikely(ctx->scoped_accesses.prev))
 			kcsan_check_scoped_accesses();
 	}
 }
@@ -916,19 +1054,60 @@ DEFINE_TSAN_VOLATILE_READ_WRITE(8);
 DEFINE_TSAN_VOLATILE_READ_WRITE(16);
 
 /*
- * The below are not required by KCSAN, but can still be emitted by the
- * compiler.
+ * Function entry and exit are used to determine the validity of reorder_access.
+ * Reordering of the access ends at the end of the function scope where the
+ * access happened. This is done for two reasons:
+ *
+ * 1. Artificially limits the scope where missing barriers are detected.
+ *    This minimizes false positives due to uninstrumented functions that
+ *    contain the required barriers but were missed.
+ *
+ * 2. Simplifies generating the stack trace of the access.
  */
 void __tsan_func_entry(void *call_pc);
-void __tsan_func_entry(void *call_pc)
+noinline void __tsan_func_entry(void *call_pc)
 {
+	if (!IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY))
+		return;
+
+	instrumentation_begin();
+	add_kcsan_stack_depth(1);
+	instrumentation_end();
 }
 EXPORT_SYMBOL(__tsan_func_entry);
+
 void __tsan_func_exit(void);
-void __tsan_func_exit(void)
+noinline void __tsan_func_exit(void)
 {
+	struct kcsan_scoped_access *reorder_access;
+
+	if (!IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY))
+		return;
+
+	instrumentation_begin();
+	reorder_access = get_reorder_access(get_ctx());
+	if (!reorder_access)
+		goto out;
+
+	if (get_kcsan_stack_depth() <= reorder_access->stack_depth) {
+		/*
+		 * Access check to catch cases where write without a barrier
+		 * (supposed release) was last access in function: because
+		 * instrumentation is inserted before the real access, a data
+		 * race due to the write giving up a c-s would only be caught if
+		 * we do the conflicting access after.
+		 */
+		check_access(reorder_access->ptr, reorder_access->size,
+			     reorder_access->type, reorder_access->ip);
+		reorder_access->size = 0;
+		reorder_access->stack_depth = INT_MIN;
+	}
+out:
+	add_kcsan_stack_depth(-1);
+	instrumentation_end();
 }
 EXPORT_SYMBOL(__tsan_func_exit);
+
 void __tsan_init(void);
 void __tsan_init(void)
 {
diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index e0a93ffdef30..e4394ea8068b 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -191,6 +191,26 @@ config KCSAN_STRICT
 	  closely aligns with the rules defined by the Linux-kernel memory
 	  consistency model (LKMM).
 
+config KCSAN_WEAK_MEMORY
+	bool "Enable weak memory modeling to detect missing memory barriers"
+	default y
+	depends on KCSAN_STRICT
+	# We can either let objtool nop __tsan_func_{entry,exit}() and builtin
+	# atomics instrumentation in .noinstr.text, or use a compiler that can
+	# implement __no_kcsan to really remove all instrumentation.
+	depends on STACK_VALIDATION || CC_IS_GCC
+	help
+	  Enable support for modeling a subset of weak memory, which allows
+	  detecting a subset of data races due to missing memory barriers.
+
+	  Depends on KCSAN_STRICT, because the options strengthening certain
+	  plain accesses by default (depending on !KCSAN_STRICT) reduce the
+	  ability to detect any data races involving reordered accesses, in
+	  particular reordered writes.
+
+	  Weak memory modeling relies on additional instrumentation and may
+	  affect performance.
+
 config KCSAN_REPORT_VALUE_CHANGE_ONLY
 	bool "Only report races where watcher observed a data value change"
 	default y
diff --git a/scripts/Makefile.kcsan b/scripts/Makefile.kcsan
index 37cb504c77e1..4c7f0d282e42 100644
--- a/scripts/Makefile.kcsan
+++ b/scripts/Makefile.kcsan
@@ -9,7 +9,12 @@ endif
 
 # Keep most options here optional, to allow enabling more compilers if absence
 # of some options does not break KCSAN nor causes false positive reports.
-export CFLAGS_KCSAN := -fsanitize=thread \
-	$(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0) -fno-optimize-sibling-calls) \
+kcsan-cflags := -fsanitize=thread -fno-optimize-sibling-calls \
 	$(call cc-option,$(call cc-param,tsan-compound-read-before-write=1),$(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1))) \
 	$(call cc-param,tsan-distinguish-volatile=1)
+
+ifndef CONFIG_KCSAN_WEAK_MEMORY
+kcsan-cflags += $(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0))
+endif
+
+export CFLAGS_KCSAN := $(kcsan-cflags)

From patchwork Tue Nov 30 11:44:13 2021
Date: Tue, 30 Nov 2021 12:44:13 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-6-elver@google.com>
Subject: [PATCH v3 05/25] kcsan: Add core memory barrier instrumentation functions
From: Marco Elver

Add the core memory barrier instrumentation functions. These invalidate
the current in-flight reordered access based on the rules for the
respective barrier types and in-flight access type.

To obtain barrier instrumentation that can be disabled via __no_kcsan
with appropriate compiler support (and not just with objtool help),
barrier instrumentation repurposes __atomic_signal_fence(), instead of
inserting explicit calls. Crucially, __atomic_signal_fence() normally
does not map to any real instructions, but is still intercepted by
-fsanitize=thread. As a result, like any other instrumentation done by
the compiler, barrier instrumentation can be disabled with __no_kcsan.

Unfortunately Clang and GCC currently differ in their __no_kcsan aka
__no_sanitize_thread behaviour with respect to builtin atomics (and
__tsan_func_{entry,exit}) instrumentation. This is already reflected in
Kconfig.kcsan's dependencies for KCSAN_WEAK_MEMORY.

A later change will introduce support for newer versions of Clang that
can implement __no_kcsan to also remove the additional instrumentation
introduced by KCSAN_WEAK_MEMORY.

Signed-off-by: Marco Elver
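To make the invalidation rules concrete, a small sketch (editorial; x and
y are placeholders, and the wiring of smp_wmb()/smp_rmb() to the
__kcsan_*() hooks only lands in later patches of this series):

	WRITE_ONCE(x, 1);	/* may be selected as the in-flight reorder_access */
	smp_rmb();		/*
				 * Read barrier: does not order the write, so
				 * the write may still be simulated as reordered
				 * past this point.
				 */
	smp_wmb();		/*
				 * Write barrier: orders the write; __kcsan_wmb()
				 * invalidates the reordered access (sa->size = 0).
				 */
	WRITE_ONCE(y, 1);	/* the write to x is no longer delayed past here */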
---
v3:
* Rework to avoid inserting explicit calls, and instead repurpose
  __atomic_signal_fence (see comment at __tsan_atomic_signal_fence),
  which is known to the ThreadSanitizer instrumentation and can
  therefore be removed via function attributes.

v2:
* Rename kcsan_atomic_release() to kcsan_atomic_builtin_memorder() to
  avoid confusion.
---
 include/linux/kcsan-checks.h | 71 +++++++++++++++++++++++++++++++++++-
 kernel/kcsan/core.c          | 68 +++++++++++++++++++++++++++++++++-
 2 files changed, 136 insertions(+), 3 deletions(-)

diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h
index a1c6a89fde71..9d2c869167f2 100644
--- a/include/linux/kcsan-checks.h
+++ b/include/linux/kcsan-checks.h
@@ -36,6 +36,36 @@
  */
 void __kcsan_check_access(const volatile void *ptr, size_t size, int type);
 
+/*
+ * See definition of __tsan_atomic_signal_fence() in kernel/kcsan/core.c.
+ * Note: The mappings are arbitrary, and do not reflect any real mappings of C11
+ * memory orders to the LKMM memory orders and vice-versa!
+ */
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_mb	__ATOMIC_SEQ_CST
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_wmb	__ATOMIC_ACQ_REL
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_rmb	__ATOMIC_ACQUIRE
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE_release	__ATOMIC_RELEASE
+
+/**
+ * __kcsan_mb - full memory barrier instrumentation
+ */
+void __kcsan_mb(void);
+
+/**
+ * __kcsan_wmb - write memory barrier instrumentation
+ */
+void __kcsan_wmb(void);
+
+/**
+ * __kcsan_rmb - read memory barrier instrumentation
+ */
+void __kcsan_rmb(void);
+
+/**
+ * __kcsan_release - release barrier instrumentation
+ */
+void __kcsan_release(void);
+
 /**
  * kcsan_disable_current - disable KCSAN for the current context
  *
@@ -159,6 +189,10 @@ void kcsan_end_scoped_access(struct kcsan_scoped_access *sa);
 static inline void __kcsan_check_access(const volatile void *ptr, size_t size,
 					int type) { }
 
+static inline void __kcsan_mb(void) { }
+static inline void __kcsan_wmb(void) { }
+static inline void __kcsan_rmb(void) { }
+static inline void __kcsan_release(void) { }
 static inline void kcsan_disable_current(void) { }
 static inline void kcsan_enable_current(void) { }
 static inline void kcsan_enable_current_nowarn(void) { }
@@ -191,12 +225,45 @@ static inline void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) { }
 */
 #define __kcsan_disable_current kcsan_disable_current
 #define __kcsan_enable_current kcsan_enable_current_nowarn
-#else
+#else /* __SANITIZE_THREAD__ */
 static inline void kcsan_check_access(const volatile void *ptr, size_t size,
 				      int type) { }
 static inline void __kcsan_enable_current(void)  { }
 static inline void __kcsan_disable_current(void) { }
-#endif
+#endif /* __SANITIZE_THREAD__ */
+
+#if defined(CONFIG_KCSAN_WEAK_MEMORY) && defined(__SANITIZE_THREAD__)
+/*
+ * Normal barrier instrumentation is not done via explicit calls, but by mapping
+ * to a repurposed __atomic_signal_fence(), which normally does not generate any
+ * real instructions, but is still intercepted by fsanitize=thread. This means,
+ * like any other compile-time instrumentation, barrier instrumentation can be
+ * disabled with the __no_kcsan function attribute.
+ *
+ * Also see definition of __tsan_atomic_signal_fence() in kernel/kcsan/core.c.
+ */
+#define __KCSAN_BARRIER_TO_SIGNAL_FENCE(name)					\
+	static __always_inline void kcsan_##name(void)				\
+	{									\
+		barrier();							\
+		__atomic_signal_fence(__KCSAN_BARRIER_TO_SIGNAL_FENCE_##name);	\
+		barrier();							\
+	}
+__KCSAN_BARRIER_TO_SIGNAL_FENCE(mb)
+__KCSAN_BARRIER_TO_SIGNAL_FENCE(wmb)
+__KCSAN_BARRIER_TO_SIGNAL_FENCE(rmb)
+__KCSAN_BARRIER_TO_SIGNAL_FENCE(release)
+#elif defined(CONFIG_KCSAN_WEAK_MEMORY) && defined(__KCSAN_INSTRUMENT_BARRIERS__)
+#define kcsan_mb	__kcsan_mb
+#define kcsan_wmb	__kcsan_wmb
+#define kcsan_rmb	__kcsan_rmb
+#define kcsan_release	__kcsan_release
+#else /* CONFIG_KCSAN_WEAK_MEMORY && ... */
+static inline void kcsan_mb(void) { }
+static inline void kcsan_wmb(void) { }
+static inline void kcsan_rmb(void) { }
+static inline void kcsan_release(void) { }
+#endif /* CONFIG_KCSAN_WEAK_MEMORY && ... */
 
 /**
  * __kcsan_check_read - check regular read access for races
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 36a75e79a0bd..2254cb75cbb0 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -942,6 +942,22 @@ void __kcsan_check_access(const volatile void *ptr, size_t size, int type)
 }
 EXPORT_SYMBOL(__kcsan_check_access);
 
+#define DEFINE_MEMORY_BARRIER(name, order_before_cond)				\
+	void __kcsan_##name(void)						\
+	{									\
+		struct kcsan_scoped_access *sa = get_reorder_access(get_ctx());	\
+		if (!sa)							\
+			return;							\
+		if (order_before_cond)						\
+			sa->size = 0;						\
+	}									\
+	EXPORT_SYMBOL(__kcsan_##name)
+
+DEFINE_MEMORY_BARRIER(mb, true);
+DEFINE_MEMORY_BARRIER(wmb, sa->type & (KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND));
+DEFINE_MEMORY_BARRIER(rmb, !(sa->type & KCSAN_ACCESS_WRITE) || (sa->type & KCSAN_ACCESS_COMPOUND));
+DEFINE_MEMORY_BARRIER(release, true);
+
 /*
  * KCSAN uses the same instrumentation that is emitted by supported compilers
  * for ThreadSanitizer (TSAN).
@@ -1130,10 +1146,19 @@ EXPORT_SYMBOL(__tsan_init);
  * functions, whose job is to also execute the operation itself.
  */
 
+static __always_inline void kcsan_atomic_builtin_memorder(int memorder)
+{
+	if (memorder == __ATOMIC_RELEASE ||
+	    memorder == __ATOMIC_SEQ_CST ||
+	    memorder == __ATOMIC_ACQ_REL)
+		__kcsan_release();
+}
+
 #define DEFINE_TSAN_ATOMIC_LOAD_STORE(bits)						\
 	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder);		\
 	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder)		\
 	{										\
+		kcsan_atomic_builtin_memorder(memorder);				\
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {				\
 			check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_ATOMIC, _RET_IP_); \
 		}									\
@@ -1143,6 +1168,7 @@
 	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder);	\
 	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder)	\
 	{										\
+		kcsan_atomic_builtin_memorder(memorder);				\
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {				\
 			check_access(ptr, bits / BITS_PER_BYTE,				\
 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC, _RET_IP_); \
@@ -1155,6 +1181,7 @@
 	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder);	\
 	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder)	\
 	{										\
+		kcsan_atomic_builtin_memorder(memorder);				\
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {				\
 			check_access(ptr, bits / BITS_PER_BYTE,				\
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |	\
@@ -1187,6 +1214,7 @@
 	int __tsan_atomic##bits##_compare_exchange_##strength(u##bits *ptr, u##bits *exp, \
 							      u##bits val, int mo, int fail_mo) \
 	{										\
+		kcsan_atomic_builtin_memorder(mo);					\
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {				\
 			check_access(ptr, bits / BITS_PER_BYTE,				\
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |	\
@@ -1202,6 +1230,7 @@
 	u##bits __tsan_atomic##bits##_compare_exchange_val(u##bits *ptr, u##bits exp, u##bits val, \
 							   int mo, int fail_mo)		\
 	{										\
+		kcsan_atomic_builtin_memorder(mo);					\
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {				\
 			check_access(ptr, bits / BITS_PER_BYTE,				\
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |	\
@@ -1233,10 +1262,47 @@ DEFINE_TSAN_ATOMIC_OPS(64);
 void __tsan_atomic_thread_fence(int memorder);
 void __tsan_atomic_thread_fence(int memorder)
 {
+	kcsan_atomic_builtin_memorder(memorder);
 	__atomic_thread_fence(memorder);
 }
 EXPORT_SYMBOL(__tsan_atomic_thread_fence);
 
+/*
+ * In instrumented files, we emit instrumentation for barriers by mapping the
+ * kernel barriers to an __atomic_signal_fence(), which is interpreted specially
+ * and otherwise has no relation to a real __atomic_signal_fence(). No known
+ * kernel code uses __atomic_signal_fence().
+ *
+ * Since fsanitize=thread instrumentation handles __atomic_signal_fence(), which
+ * are turned into calls to __tsan_atomic_signal_fence(), such instrumentation
+ * can be disabled via the __no_kcsan function attribute (vs. an explicit call
+ * which could not). When __no_kcsan is requested, __atomic_signal_fence()
+ * generates no code.
+ *
+ * Note: The result of using __atomic_signal_fence() with KCSAN enabled is
+ * potentially limiting the compiler's ability to reorder operations; however,
+ * if barriers were instrumented with explicit calls (without LTO), the compiler
+ * couldn't optimize much anyway. The result of a hypothetical architecture
+ * using __atomic_signal_fence() in normal code would be KCSAN false negatives.
+ */
 void __tsan_atomic_signal_fence(int memorder);
-void __tsan_atomic_signal_fence(int memorder) { }
+noinline void __tsan_atomic_signal_fence(int memorder)
+{
+	switch (memorder) {
+	case __KCSAN_BARRIER_TO_SIGNAL_FENCE_mb:
+		__kcsan_mb();
+		break;
+	case __KCSAN_BARRIER_TO_SIGNAL_FENCE_wmb:
+		__kcsan_wmb();
+		break;
+	case __KCSAN_BARRIER_TO_SIGNAL_FENCE_rmb:
+		__kcsan_rmb();
+		break;
+	case __KCSAN_BARRIER_TO_SIGNAL_FENCE_release:
+		__kcsan_release();
+		break;
+	default:
+		break;
+	}
+}
 EXPORT_SYMBOL(__tsan_atomic_signal_fence);

From patchwork Tue Nov 30 11:44:14 2021
From patchwork Tue Nov 30 11:44:14 2021
Subject: [PATCH v3 06/25] kcsan, kbuild: Add option for barrier instrumentation only
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:14 +0100
Message-Id: <20211130114433.2580590-7-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

Source files that disable KCSAN via KCSAN_SANITIZE := n remove all instrumentation, including explicit barrier instrumentation. With instrumentation for memory barriers, a few places require enabling just the explicit barrier instrumentation to avoid false positives. Setting the Makefile variable KCSAN_INSTRUMENT_BARRIERS_obj.o, or KCSAN_INSTRUMENT_BARRIERS (for all files), to 'y' enables only the explicit barrier instrumentation. Signed-off-by: Marco Elver --- scripts/Makefile.lib | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib index d1f865b8c0cb..ab17f7b2e33c 100644 --- a/scripts/Makefile.lib +++ b/scripts/Makefile.lib @@ -182,6 +182,11 @@ ifeq ($(CONFIG_KCSAN),y) _c_flags += $(if $(patsubst n%,, \ $(KCSAN_SANITIZE_$(basetarget).o)$(KCSAN_SANITIZE)y), \ $(CFLAGS_KCSAN)) +# Some uninstrumented files provide implied barriers required to avoid false +# positives: set KCSAN_INSTRUMENT_BARRIERS for barrier instrumentation only.
+_c_flags += $(if $(patsubst n%,, \ + $(KCSAN_INSTRUMENT_BARRIERS_$(basetarget).o)$(KCSAN_INSTRUMENT_BARRIERS)n), \ + -D__KCSAN_INSTRUMENT_BARRIERS__) endif # $(srctree)/$(src) for including checkin headers from generated source files
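[Editor's note] As a usage sketch (the object file name below is hypothetical), a subsystem Makefile would combine the two variables like so:

    # Hypothetical file: disable full KCSAN instrumentation for it, but keep
    # the explicit barrier instrumentation to avoid false positives.
    KCSAN_SANITIZE_early_sync.o := n
    KCSAN_INSTRUMENT_BARRIERS_early_sync.o := y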
From patchwork Tue Nov 30 11:44:15 2021
Subject: [PATCH v3 07/25] kcsan: Call scoped accesses reordered in reports
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:15 +0100
Message-Id: <20211130114433.2580590-8-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

The scoping of an access simply denotes the scope in which it may be reordered. However, in reports, it is less confusing to say the access is "reordered", which is also more accurate at the time the race occurred. Signed-off-by: Marco Elver --- kernel/kcsan/kcsan_test.c | 4 ++-- kernel/kcsan/report.c | 16 ++++++++-------- 2 files changed, 10 insertions(+), 10 deletions(-) diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c index 660729238588..6e3c2b8bc608 100644 --- a/kernel/kcsan/kcsan_test.c +++ b/kernel/kcsan/kcsan_test.c @@ -213,9 +213,9 @@ static bool report_matches(const struct expect_report *r) const bool is_atomic = (ty & KCSAN_ACCESS_ATOMIC); const bool is_scoped = (ty & KCSAN_ACCESS_SCOPED); const char *const access_type_aux = - (is_atomic && is_scoped) ? " (marked, scoped)" + (is_atomic && is_scoped) ? " (marked, reordered)" : (is_atomic ? " (marked)" - : (is_scoped ? " (scoped)" : "")); + : (is_scoped ?
" (reordered)" : "")); if (i == 1) { /* Access 2 */ diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c index fc15077991c4..1b0e050bdf6a 100644 --- a/kernel/kcsan/report.c +++ b/kernel/kcsan/report.c @@ -215,9 +215,9 @@ static const char *get_access_type(int type) if (type & KCSAN_ACCESS_ASSERT) { if (type & KCSAN_ACCESS_SCOPED) { if (type & KCSAN_ACCESS_WRITE) - return "assert no accesses (scoped)"; + return "assert no accesses (reordered)"; else - return "assert no writes (scoped)"; + return "assert no writes (reordered)"; } else { if (type & KCSAN_ACCESS_WRITE) return "assert no accesses"; @@ -240,17 +240,17 @@ static const char *get_access_type(int type) case KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: return "read-write (marked)"; case KCSAN_ACCESS_SCOPED: - return "read (scoped)"; + return "read (reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_ATOMIC: - return "read (marked, scoped)"; + return "read (marked, reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE: - return "write (scoped)"; + return "write (reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: - return "write (marked, scoped)"; + return "write (marked, reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE: - return "read-write (scoped)"; + return "read-write (reordered)"; case KCSAN_ACCESS_SCOPED | KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC: - return "read-write (marked, scoped)"; + return "read-write (marked, reordered)"; default: BUG(); } From patchwork Tue Nov 30 11:44:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Marco Elver X-Patchwork-Id: 12646991 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id B1698C433EF for ; Tue, 30 Nov 2021 11:49:38 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id EFCC46B0081; Tue, 30 Nov 2021 06:45:34 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id E85CE6B0082; Tue, 30 Nov 2021 06:45:34 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D26C66B0083; Tue, 30 Nov 2021 06:45:34 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from forelay.hostedemail.com (smtprelay0231.hostedemail.com [216.40.44.231]) by kanga.kvack.org (Postfix) with ESMTP id C302D6B0081 for ; Tue, 30 Nov 2021 06:45:34 -0500 (EST) Received: from smtpin16.hostedemail.com (10.5.19.251.rfc1918.com [10.5.19.251]) by forelay05.hostedemail.com (Postfix) with ESMTP id 8A83A18458A99 for ; Tue, 30 Nov 2021 11:45:24 +0000 (UTC) X-FDA: 78865416168.16.363C865 Received: from mail-wm1-f74.google.com (mail-wm1-f74.google.com [209.85.128.74]) by imf21.hostedemail.com (Postfix) with ESMTP id 9DB90D0369F8 for ; Tue, 30 Nov 2021 11:45:20 +0000 (UTC) Received: by mail-wm1-f74.google.com with SMTP id l4-20020a05600c1d0400b00332f47a0fa3so12690899wms.8 for ; Tue, 30 Nov 2021 03:45:23 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20210112; h=date:in-reply-to:message-id:mime-version:references:subject:from:to :cc; bh=9MKtF9b1Z9XuvDPczN5keFD3RJ+5M3WbaM5UaZup9oY=; b=Mld6ftuJH52h3z9GNnJEoeByNAIFNrr3QsNiiYZFnnrRHu2TfyHKAOcJkvA9ddFunQ W7HZZRZi9E4yV65PPNgG1ZDF1sY+of3aAI7j56Q5/0C9Rf6OEWGZdfCOHijSM7jH24g8 
From patchwork Tue Nov 30 11:44:16 2021
Subject: [PATCH v3 08/25] kcsan: Show location access was reordered to
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:16 +0100
Message-Id: <20211130114433.2580590-9-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

Also show the location the access was reordered to.
An example report: | ================================================================== | BUG: KCSAN: data-race in test_kernel_wrong_memorder / test_kernel_wrong_memorder | | read-write to 0xffffffffc01e61a8 of 8 bytes by task 2311 on cpu 5: | test_kernel_wrong_memorder+0x57/0x90 | access_thread+0x99/0xe0 | kthread+0x2ba/0x2f0 | ret_from_fork+0x22/0x30 | | read-write (reordered) to 0xffffffffc01e61a8 of 8 bytes by task 2310 on cpu 7: | test_kernel_wrong_memorder+0x57/0x90 | access_thread+0x99/0xe0 | kthread+0x2ba/0x2f0 | ret_from_fork+0x22/0x30 | | | +-> reordered to: test_kernel_wrong_memorder+0x80/0x90 | | Reported by Kernel Concurrency Sanitizer on: | CPU: 7 PID: 2310 Comm: access_thread Not tainted 5.14.0-rc1+ #18 | Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-2 04/01/2014 | ================================================================== Signed-off-by: Marco Elver Reviewed-by: Boqun Feng --- kernel/kcsan/report.c | 35 +++++++++++++++++++++++------------ 1 file changed, 23 insertions(+), 12 deletions(-) diff --git a/kernel/kcsan/report.c b/kernel/kcsan/report.c index 1b0e050bdf6a..67794404042a 100644 --- a/kernel/kcsan/report.c +++ b/kernel/kcsan/report.c @@ -308,10 +308,12 @@ static int get_stack_skipnr(const unsigned long stack_entries[], int num_entries /* * Skips to the first entry that matches the function of @ip, and then replaces - * that entry with @ip, returning the entries to skip. + * that entry with @ip, returning the entries to skip with @replaced containing + * the replaced entry. */ static int -replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned long ip) +replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned long ip, + unsigned long *replaced) { unsigned long symbolsize, offset; unsigned long target_func; @@ -330,6 +332,7 @@ replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned lon func -= offset; if (func == target_func) { + *replaced = stack_entries[skip]; stack_entries[skip] = ip; return skip; } @@ -342,9 +345,10 @@ replace_stack_entry(unsigned long stack_entries[], int num_entries, unsigned lon } static int -sanitize_stack_entries(unsigned long stack_entries[], int num_entries, unsigned long ip) +sanitize_stack_entries(unsigned long stack_entries[], int num_entries, unsigned long ip, + unsigned long *replaced) { - return ip ? replace_stack_entry(stack_entries, num_entries, ip) : + return ip ?
replace_stack_entry(stack_entries, num_entries, ip, replaced) : get_stack_skipnr(stack_entries, num_entries); } @@ -360,6 +364,14 @@ static int sym_strcmp(void *addr1, void *addr2) return strncmp(buf1, buf2, sizeof(buf1)); } +static void +print_stack_trace(unsigned long stack_entries[], int num_entries, unsigned long reordered_to) +{ + stack_trace_print(stack_entries, num_entries, 0); + if (reordered_to) + pr_err(" |\n +-> reordered to: %pS\n", (void *)reordered_to); +} + static void print_verbose_info(struct task_struct *task) { if (!task) @@ -378,10 +390,12 @@ static void print_report(enum kcsan_value_change value_change, struct other_info *other_info, u64 old, u64 new, u64 mask) { + unsigned long reordered_to = 0; unsigned long stack_entries[NUM_STACK_ENTRIES] = { 0 }; int num_stack_entries = stack_trace_save(stack_entries, NUM_STACK_ENTRIES, 1); - int skipnr = sanitize_stack_entries(stack_entries, num_stack_entries, ai->ip); + int skipnr = sanitize_stack_entries(stack_entries, num_stack_entries, ai->ip, &reordered_to); unsigned long this_frame = stack_entries[skipnr]; + unsigned long other_reordered_to = 0; unsigned long other_frame = 0; int other_skipnr = 0; /* silence uninit warnings */ @@ -394,7 +408,7 @@ static void print_report(enum kcsan_value_change value_change, if (other_info) { other_skipnr = sanitize_stack_entries(other_info->stack_entries, other_info->num_stack_entries, - other_info->ai.ip); + other_info->ai.ip, &other_reordered_to); other_frame = other_info->stack_entries[other_skipnr]; /* @value_change is only known for the other thread */ @@ -434,10 +448,9 @@ static void print_report(enum kcsan_value_change value_change, other_info->ai.cpu_id); /* Print the other thread's stack trace. */ - stack_trace_print(other_info->stack_entries + other_skipnr, + print_stack_trace(other_info->stack_entries + other_skipnr, other_info->num_stack_entries - other_skipnr, - 0); - + other_reordered_to); if (IS_ENABLED(CONFIG_KCSAN_VERBOSE)) print_verbose_info(other_info->task); @@ -451,9 +464,7 @@ static void print_report(enum kcsan_value_change value_change, get_thread_desc(ai->task_pid), ai->cpu_id); } /* Print stack trace of this thread. 
*/ - stack_trace_print(stack_entries + skipnr, num_stack_entries - skipnr, - 0); - + print_stack_trace(stack_entries + skipnr, num_stack_entries - skipnr, reordered_to); if (IS_ENABLED(CONFIG_KCSAN_VERBOSE)) print_verbose_info(current);
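[Editor's note] Condensed from the diff above, the call sequence in print_report() for the current thread's stack now reads:

    unsigned long reordered_to = 0;
    int skipnr = sanitize_stack_entries(stack_entries, num_stack_entries,
                                        ai->ip, &reordered_to);

    /*
     * If ai->ip replaced a stack entry, reordered_to holds the replaced
     * program counter, which print_stack_trace() appends as the
     * "+-> reordered to: %pS" line seen in the example report.
     */
    print_stack_trace(stack_entries + skipnr, num_stack_entries - skipnr,
                      reordered_to);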
From patchwork Tue Nov 30 11:44:17 2021
Subject: [PATCH v3 09/25] kcsan: Document modeling of weak memory
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:17 +0100
Message-Id: <20211130114433.2580590-10-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

Document how KCSAN models a subset of weak memory and the subset of missing memory barriers it can detect as a result. Signed-off-by: Marco Elver --- v2: * Note the reason that address or control dependencies do not require special handling. --- Documentation/dev-tools/kcsan.rst | 76 +++++++++++++++++++++++++------ 1 file changed, 63 insertions(+), 13 deletions(-) diff --git a/Documentation/dev-tools/kcsan.rst b/Documentation/dev-tools/kcsan.rst index 7db43c7c09b8..3ae866dcc924 100644 --- a/Documentation/dev-tools/kcsan.rst +++ b/Documentation/dev-tools/kcsan.rst @@ -204,17 +204,17 @@ Ultimately this allows to determine the possible executions of concurrent code, and if that code is free from data races. KCSAN is aware of *marked atomic operations* (``READ_ONCE``, ``WRITE_ONCE``, -``atomic_*``, etc.), but is oblivious of any ordering guarantees and simply -assumes that memory barriers are placed correctly. In other words, KCSAN -assumes that as long as a plain access is not observed to race with another -conflicting access, memory operations are correctly ordered. - -This means that KCSAN will not report *potential* data races due to missing -memory ordering. Developers should therefore carefully consider the required -memory ordering requirements that remain unchecked. If, however, missing -memory ordering (that is observable with a particular compiler and -architecture) leads to an observable data race (e.g. entering a critical -section erroneously), KCSAN would report the resulting data race. +``atomic_*``, etc.), and a subset of ordering guarantees implied by memory +barriers. With ``CONFIG_KCSAN_WEAK_MEMORY=y``, KCSAN models load or store +buffering, and can detect missing ``smp_mb()``, ``smp_wmb()``, ``smp_rmb()``, +``smp_store_release()``, and all ``atomic_*`` operations with equivalent +implied barriers. + +Note, KCSAN will not report all data races due to missing memory ordering, +specifically where a memory barrier would be required to prohibit subsequent +memory operations from reordering before the barrier. Developers should +therefore carefully consider the memory ordering requirements that +remain unchecked.
Race Detection Beyond Data Races -------------------------------- @@ -268,6 +268,56 @@ marked operations, if all accesses to a variable that is accessed concurrently are properly marked, KCSAN will never trigger a watchpoint and therefore never report the accesses. +Modeling Weak Memory +~~~~~~~~~~~~~~~~~~~~ + +KCSAN's approach to detecting data races due to missing memory barriers is +based on modeling access reordering (with ``CONFIG_KCSAN_WEAK_MEMORY=y``). +Each plain memory access for which a watchpoint is set up is also selected for +simulated reordering within the scope of its function (at most 1 in-flight +access). + +Once an access has been selected for reordering, it is checked along with every +other access until the end of the function scope. If an appropriate memory +barrier is encountered, the access will no longer be considered for simulated +reordering. + +When the result of a memory operation should be ordered by a barrier, KCSAN can +then detect data races where the conflict only occurs as a result of a missing +barrier. Consider the example:: + + int x, flag; + void T1(void) + { + x = 1; // data race! + WRITE_ONCE(flag, 1); // correct: smp_store_release(&flag, 1) + } + void T2(void) + { + while (!READ_ONCE(flag)); // correct: smp_load_acquire(&flag) + ... = x; // data race! + } + +When weak memory modeling is enabled, KCSAN can consider ``x`` in ``T1`` for +simulated reordering. After the write of ``flag``, ``x`` is again checked for +concurrent accesses: because ``T2`` is able to proceed after the write of +``flag``, a data race is detected. With the correct barriers in place, ``x`` +would not be considered for reordering after the proper release of ``flag``, +and no data race would be detected. + +Deliberate trade-offs in complexity, as well as practical limitations, mean +that only a subset of data races due to missing memory barriers can be detected. With +currently available compiler support, the implementation is limited to modeling +the effects of "buffering" (delaying accesses), since the runtime cannot +"prefetch" accesses. Also recall that watchpoints are only set up for plain +accesses, which are the only access type for which KCSAN simulates reordering. This +means reordering of marked accesses is not modeled. + +A consequence of the above is that acquire operations do not require barrier +instrumentation (no prefetching). Furthermore, marked accesses introducing +address or control dependencies do not require special handling (the marked +access cannot be reordered, later dependent accesses cannot be prefetched). + Key Properties ~~~~~~~~~~~~~~ @@ -290,8 +340,8 @@ Key Properties 4. **Detects Racy Writes from Devices:** Due to checking data values upon setting up watchpoints, racy writes from devices can also be detected. -5. **Memory Ordering:** KCSAN is *not* explicitly aware of the LKMM's ordering - rules; this may result in missed data races (false negatives). +5. **Memory Ordering:** KCSAN is aware of only a subset of LKMM ordering rules; + this may result in missed data races (false negatives). 6.
**Analysis Accuracy:** For observed executions, due to using a sampling strategy, the analysis is *unsound* (false negatives possible), but aims to
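[Editor's note] The documented example, with the correct barriers in place as named in the comments above, would read as follows; a sketch using the kernel primitives, not part of the patch:

    int x, flag;

    void T1(void)
    {
            x = 1;
            smp_store_release(&flag, 1);    /* orders the store to x before flag */
    }

    void T2(void)
    {
            int r;

            while (!smp_load_acquire(&flag))    /* pairs with the release in T1 */
                    ;
            r = x;    /* no data race: ordered by release/acquire */
    }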
From patchwork Tue Nov 30 11:44:18 2021
Subject: [PATCH v3 10/25] kcsan: test: Match reordered or normal accesses
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:18 +0100
Message-Id: <20211130114433.2580590-11-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

With weak memory modeling reordering accesses, any access can now appear as "(reordered)" in a report. Match any permutation of accesses if CONFIG_KCSAN_WEAK_MEMORY=y, so that an access is matched whether or not it is denoted "(reordered)". Signed-off-by: Marco Elver --- kernel/kcsan/kcsan_test.c | 92 +++++++++++++++++++++++++++------------ 1 file changed, 63 insertions(+), 29 deletions(-) diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c index 6e3c2b8bc608..ec054879201b 100644 --- a/kernel/kcsan/kcsan_test.c +++ b/kernel/kcsan/kcsan_test.c @@ -151,7 +151,7 @@ struct expect_report { /* Check observed report matches information in @r. */ __no_kcsan -static bool report_matches(const struct expect_report *r) +static bool __report_matches(const struct expect_report *r) { const bool is_assert = (r->access[0].type | r->access[1].type) & KCSAN_ACCESS_ASSERT; bool ret = false; @@ -253,6 +253,40 @@ static bool report_matches(const struct expect_report *r) return ret; } +static __always_inline const struct expect_report * +__report_set_scoped(struct expect_report *r, int accesses) +{ + BUILD_BUG_ON(accesses > 3); + + if (accesses & 1) + r->access[0].type |= KCSAN_ACCESS_SCOPED; + else + r->access[0].type &= ~KCSAN_ACCESS_SCOPED; + + if (accesses & 2) + r->access[1].type |= KCSAN_ACCESS_SCOPED; + else + r->access[1].type &= ~KCSAN_ACCESS_SCOPED; + + return r; +} + +__no_kcsan +static bool report_matches_any_reordered(struct expect_report *r) +{ + return __report_matches(__report_set_scoped(r, 0)) || + __report_matches(__report_set_scoped(r, 1)) || + __report_matches(__report_set_scoped(r, 2)) || + __report_matches(__report_set_scoped(r, 3)); +} + +#ifdef CONFIG_KCSAN_WEAK_MEMORY +/* Due to reordering accesses, any access may appear as "(reordered)".
*/ +#define report_matches report_matches_any_reordered +#else +#define report_matches __report_matches +#endif + /* ===== Test kernels ===== */ static long test_sink; @@ -438,13 +472,13 @@ static noinline void test_kernel_xor_1bit(void) __no_kcsan static void test_basic(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - static const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, @@ -469,14 +503,14 @@ static void test_basic(struct kunit *test) __no_kcsan static void test_concurrent_races(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { /* NULL will match any address. */ { test_kernel_rmw_array, NULL, 0, __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, { test_kernel_rmw_array, NULL, 0, __KCSAN_ACCESS_RW(0) }, }, }; - static const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_rmw_array, NULL, 0, 0 }, { test_kernel_rmw_array, NULL, 0, 0 }, @@ -498,13 +532,13 @@ static void test_concurrent_races(struct kunit *test) __no_kcsan static void test_novalue_change(struct kunit *test) { - const struct expect_report expect_rw = { + struct expect_report expect_rw = { .access = { { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - const struct expect_report expect_ww = { + struct expect_report expect_ww = { .access = { { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -530,13 +564,13 @@ static void test_novalue_change(struct kunit *test) __no_kcsan static void test_novalue_change_exception(struct kunit *test) { - const struct expect_report expect_rw = { + struct expect_report expect_rw = { .access = { { test_kernel_write_nochange_rcu, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - const struct expect_report expect_ww = { + struct expect_report expect_ww = { .access = { { test_kernel_write_nochange_rcu, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_write_nochange_rcu, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -556,7 +590,7 @@ static void test_novalue_change_exception(struct kunit *test) __no_kcsan static void test_unknown_origin(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { NULL }, @@ -578,7 +612,7 @@ static void test_unknown_origin(struct kunit *test) __no_kcsan static void test_write_write_assume_atomic(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, { test_kernel_write, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -604,7 +638,7 @@ static void test_write_write_assume_atomic(struct kunit *test) __no_kcsan static void test_write_write_struct(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { 
test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, @@ -626,7 +660,7 @@ static void test_write_write_struct(struct kunit *test) __no_kcsan static void test_write_write_struct_part(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_write_struct_part, &test_struct.val[3], sizeof(test_struct.val[3]), KCSAN_ACCESS_WRITE }, @@ -658,7 +692,7 @@ static void test_read_atomic_write_atomic(struct kunit *test) __no_kcsan static void test_read_plain_atomic_write(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_write_atomic, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC }, @@ -679,7 +713,7 @@ static void test_read_plain_atomic_write(struct kunit *test) __no_kcsan static void test_read_plain_atomic_rmw(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_atomic_rmw, &test_var, sizeof(test_var), @@ -701,13 +735,13 @@ static void test_read_plain_atomic_rmw(struct kunit *test) __no_kcsan static void test_zero_size_access(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, }, }; - const struct expect_report never = { + struct expect_report never = { .access = { { test_kernel_write_struct, &test_struct, sizeof(test_struct), KCSAN_ACCESS_WRITE }, { test_kernel_read_struct_zero_size, &test_struct.val[3], 0, 0 }, @@ -741,7 +775,7 @@ static void test_data_race(struct kunit *test) __no_kcsan static void test_assert_exclusive_writer(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -759,7 +793,7 @@ static void test_assert_exclusive_writer(struct kunit *test) __no_kcsan static void test_assert_exclusive_access(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, @@ -777,19 +811,19 @@ static void test_assert_exclusive_access(struct kunit *test) __no_kcsan static void test_assert_exclusive_access_writer(struct kunit *test) { - const struct expect_report expect_access_writer = { + struct expect_report expect_access_writer = { .access = { { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, }, }; - const struct expect_report expect_access_access = { + struct expect_report expect_access_access = { .access = { { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, { test_kernel_assert_access, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE }, }, }; - const struct expect_report never = { + struct expect_report never = { .access 
= { { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, { test_kernel_assert_writer, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, @@ -813,7 +847,7 @@ static void test_assert_exclusive_access_writer(struct kunit *test) __no_kcsan static void test_assert_exclusive_bits_change(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_assert_bits_change, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT }, { test_kernel_change_bits, &test_var, sizeof(test_var), @@ -844,13 +878,13 @@ static void test_assert_exclusive_bits_nochange(struct kunit *test) __no_kcsan static void test_assert_exclusive_writer_scoped(struct kunit *test) { - const struct expect_report expect_start = { + struct expect_report expect_start = { .access = { { test_kernel_assert_writer_scoped, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_SCOPED }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, }, }; - const struct expect_report expect_inscope = { + struct expect_report expect_inscope = { .access = { { test_enter_scope, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_SCOPED }, { test_kernel_write_nochange, &test_var, sizeof(test_var), KCSAN_ACCESS_WRITE }, @@ -871,16 +905,16 @@ static void test_assert_exclusive_writer_scoped(struct kunit *test) __no_kcsan static void test_assert_exclusive_access_scoped(struct kunit *test) { - const struct expect_report expect_start1 = { + struct expect_report expect_start1 = { .access = { { test_kernel_assert_access_scoped, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_SCOPED }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, }, }; - const struct expect_report expect_start2 = { + struct expect_report expect_start2 = { .access = { expect_start1.access[0], expect_start1.access[0] }, }; - const struct expect_report expect_inscope = { + struct expect_report expect_inscope = { .access = { { test_enter_scope, &test_var, sizeof(test_var), KCSAN_ACCESS_ASSERT | KCSAN_ACCESS_WRITE | KCSAN_ACCESS_SCOPED }, { test_kernel_read, &test_var, sizeof(test_var), 0 }, @@ -985,7 +1019,7 @@ static void test_atomic_builtins(struct kunit *test) __no_kcsan static void test_1bit_value_change(struct kunit *test) { - const struct expect_report expect = { + struct expect_report expect = { .access = { { test_kernel_read, &test_var, sizeof(test_var), 0 }, { test_kernel_xor_1bit, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) },
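[Editor's note] The four __report_matches() calls in report_matches_any_reordered() amount to enumerating a two-bit mask over which accesses carry the scoped/reordered flag; an equivalent loop form, as a sketch:

    /* bit 0: access[0] "(reordered)"; bit 1: access[1] "(reordered)" */
    for (int accesses = 0; accesses <= 3; ++accesses) {
            if (__report_matches(__report_set_scoped(r, accesses)))
                    return true;
    }
    return false;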
From patchwork Tue Nov 30 11:44:19 2021
Date: Tue, 30 Nov 2021 12:44:19 +0100
Message-Id: <20211130114433.2580590-12-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org X-Rspamd-Server: rspam11 X-Rspamd-Queue-Id: 116AA20019C3 X-Stat-Signature: xpwwxnokzmfzxay5e5t8pirqogd4jizx Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=hXfoeu4g; spf=pass (imf26.hostedemail.com: domain of 32Q6mYQUKCKIGNXGTIQQING.EQONKPWZ-OOMXCEM.QTI@flex--elver.bounces.google.com designates 209.85.221.74 as permitted sender) smtp.mailfrom=32Q6mYQUKCKIGNXGTIQQING.EQONKPWZ-OOMXCEM.QTI@flex--elver.bounces.google.com; dmarc=pass (policy=reject) header.from=google.com X-HE-Tag: 1638272729-512002 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Adds test cases to check that memory barriers are instrumented correctly, and detection of missing memory barriers is working as intended if CONFIG_KCSAN_STRICT=y. Signed-off-by: Marco Elver --- v3: * Remove __no_kcsan from barrier test, given __no_kcsan would now remove barrier instrumentation, too. --- kernel/kcsan/kcsan_test.c | 319 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 319 insertions(+) diff --git a/kernel/kcsan/kcsan_test.c b/kernel/kcsan/kcsan_test.c index ec054879201b..5bf94550bcdf 100644 --- a/kernel/kcsan/kcsan_test.c +++ b/kernel/kcsan/kcsan_test.c @@ -16,9 +16,12 @@ #define pr_fmt(fmt) "kcsan_test: " fmt #include +#include +#include #include #include #include +#include #include #include #include @@ -305,6 +308,16 @@ static DEFINE_SEQLOCK(test_seqlock); __no_kcsan static noinline void sink_value(long v) { WRITE_ONCE(test_sink, v); } +/* + * Generates a delay and some accesses that enter the runtime but do not produce + * data races. + */ +static noinline void test_delay(int iter) +{ + while (iter--) + sink_value(READ_ONCE(test_sink)); +} + static noinline void test_kernel_read(void) { sink_value(test_var); } static noinline void test_kernel_write(void) @@ -466,8 +479,219 @@ static noinline void test_kernel_xor_1bit(void) kcsan_nestable_atomic_end(); } +#define TEST_KERNEL_LOCKED(name, acquire, release) \ + static noinline void test_kernel_##name(void) \ + { \ + long *flag = &test_struct.val[0]; \ + long v = 0; \ + if (!(acquire)) \ + return; \ + while (v++ < 100) { \ + test_var++; \ + barrier(); \ + } \ + release; \ + test_delay(10); \ + } + +TEST_KERNEL_LOCKED(with_memorder, + cmpxchg_acquire(flag, 0, 1) == 0, + smp_store_release(flag, 0)); +TEST_KERNEL_LOCKED(wrong_memorder, + cmpxchg_relaxed(flag, 0, 1) == 0, + WRITE_ONCE(*flag, 0)); +TEST_KERNEL_LOCKED(atomic_builtin_with_memorder, + __atomic_compare_exchange_n(flag, &v, 1, 0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED), + __atomic_store_n(flag, 0, __ATOMIC_RELEASE)); +TEST_KERNEL_LOCKED(atomic_builtin_wrong_memorder, + __atomic_compare_exchange_n(flag, &v, 1, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED), + __atomic_store_n(flag, 0, __ATOMIC_RELAXED)); + /* ===== Test cases ===== */ +/* + * Tests that various barriers have the expected effect on internal state. Not + * exhaustive on atomic_t operations. 
Unlike the selftest, also checks for + too-strict barrier instrumentation; such cases can be tolerated, because they + do not cause false positives, but we should at least be aware of them. + */ +static void test_barrier_nothreads(struct kunit *test) +{ +#ifdef CONFIG_KCSAN_WEAK_MEMORY + struct kcsan_scoped_access *reorder_access = &current->kcsan_ctx.reorder_access; +#else + struct kcsan_scoped_access *reorder_access = NULL; +#endif + arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED; + DEFINE_SPINLOCK(spinlock); + DEFINE_MUTEX(mutex); + atomic_t dummy; + + KCSAN_TEST_REQUIRES(test, reorder_access != NULL); + KCSAN_TEST_REQUIRES(test, IS_ENABLED(CONFIG_SMP)); + +#define __KCSAN_EXPECT_BARRIER(access_type, barrier, order_before, name) \ + do { \ + reorder_access->type = (access_type) | KCSAN_ACCESS_SCOPED; \ + reorder_access->size = sizeof(test_var); \ + barrier; \ + KUNIT_EXPECT_EQ_MSG(test, reorder_access->size, \ + order_before ? 0 : sizeof(test_var), \ + "improperly instrumented type=(" #access_type "): " name); \ + } while (0) +#define KCSAN_EXPECT_READ_BARRIER(b, o) __KCSAN_EXPECT_BARRIER(0, b, o, #b) +#define KCSAN_EXPECT_WRITE_BARRIER(b, o) __KCSAN_EXPECT_BARRIER(KCSAN_ACCESS_WRITE, b, o, #b) +#define KCSAN_EXPECT_RW_BARRIER(b, o) __KCSAN_EXPECT_BARRIER(KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE, b, o, #b) + + /* Force creating a valid entry in reorder_access first. */ + test_var = 0; + while (test_var++ < 1000000 && reorder_access->size != sizeof(test_var)) + __kcsan_check_read(&test_var, sizeof(test_var)); + KUNIT_ASSERT_EQ(test, reorder_access->size, sizeof(test_var)); + + kcsan_nestable_atomic_begin(); /* No watchpoints in called functions. */ + + KCSAN_EXPECT_READ_BARRIER(mb(), true); + KCSAN_EXPECT_READ_BARRIER(wmb(), false); + KCSAN_EXPECT_READ_BARRIER(rmb(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb(), true); + KCSAN_EXPECT_READ_BARRIER(smp_wmb(), false); + KCSAN_EXPECT_READ_BARRIER(smp_rmb(), true); + KCSAN_EXPECT_READ_BARRIER(dma_wmb(), false); + KCSAN_EXPECT_READ_BARRIER(dma_rmb(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb__before_atomic(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb__after_atomic(), true); + KCSAN_EXPECT_READ_BARRIER(smp_mb__after_spinlock(), true); + KCSAN_EXPECT_READ_BARRIER(smp_store_mb(test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(smp_load_acquire(&test_var), false); + KCSAN_EXPECT_READ_BARRIER(smp_store_release(&test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(xchg(&test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(xchg_release(&test_var, 0), true); + KCSAN_EXPECT_READ_BARRIER(xchg_relaxed(&test_var, 0), false); + KCSAN_EXPECT_READ_BARRIER(cmpxchg(&test_var, 0, 0), true); + KCSAN_EXPECT_READ_BARRIER(cmpxchg_release(&test_var, 0, 0), true); + KCSAN_EXPECT_READ_BARRIER(cmpxchg_relaxed(&test_var, 0, 0), false); + KCSAN_EXPECT_READ_BARRIER(atomic_read(&dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_read_acquire(&dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_set(&dummy, 0), false); + KCSAN_EXPECT_READ_BARRIER(atomic_set_release(&dummy, 0), true); + KCSAN_EXPECT_READ_BARRIER(atomic_add(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return_acquire(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return_release(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_add_return_relaxed(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add_acquire(1, &dummy), false); +
KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add_release(1, &dummy), true); + KCSAN_EXPECT_READ_BARRIER(atomic_fetch_add_relaxed(1, &dummy), false); + KCSAN_EXPECT_READ_BARRIER(test_and_set_bit(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(test_and_clear_bit(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(test_and_change_bit(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(__clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true); + KCSAN_EXPECT_READ_BARRIER(arch_spin_lock(&arch_spinlock), false); + KCSAN_EXPECT_READ_BARRIER(arch_spin_unlock(&arch_spinlock), true); + KCSAN_EXPECT_READ_BARRIER(spin_lock(&spinlock), false); + KCSAN_EXPECT_READ_BARRIER(spin_unlock(&spinlock), true); + KCSAN_EXPECT_READ_BARRIER(mutex_lock(&mutex), false); + KCSAN_EXPECT_READ_BARRIER(mutex_unlock(&mutex), true); + + KCSAN_EXPECT_WRITE_BARRIER(mb(), true); + KCSAN_EXPECT_WRITE_BARRIER(wmb(), true); + KCSAN_EXPECT_WRITE_BARRIER(rmb(), false); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_wmb(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_rmb(), false); + KCSAN_EXPECT_WRITE_BARRIER(dma_wmb(), true); + KCSAN_EXPECT_WRITE_BARRIER(dma_rmb(), false); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb__before_atomic(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb__after_atomic(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_mb__after_spinlock(), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_store_mb(test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(smp_load_acquire(&test_var), false); + KCSAN_EXPECT_WRITE_BARRIER(smp_store_release(&test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(xchg(&test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(xchg_release(&test_var, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(xchg_relaxed(&test_var, 0), false); + KCSAN_EXPECT_WRITE_BARRIER(cmpxchg(&test_var, 0, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(cmpxchg_release(&test_var, 0, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(cmpxchg_relaxed(&test_var, 0, 0), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_read(&dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_read_acquire(&dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_set(&dummy, 0), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_set_release(&dummy, 0), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return_acquire(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return_release(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_add_return_relaxed(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add_acquire(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add_release(1, &dummy), true); + KCSAN_EXPECT_WRITE_BARRIER(atomic_fetch_add_relaxed(1, &dummy), false); + KCSAN_EXPECT_WRITE_BARRIER(test_and_set_bit(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(test_and_clear_bit(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(test_and_change_bit(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(__clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true); + KCSAN_EXPECT_WRITE_BARRIER(arch_spin_lock(&arch_spinlock), false); + KCSAN_EXPECT_WRITE_BARRIER(arch_spin_unlock(&arch_spinlock), true); + 
KCSAN_EXPECT_WRITE_BARRIER(spin_lock(&spinlock), false); + KCSAN_EXPECT_WRITE_BARRIER(spin_unlock(&spinlock), true); + KCSAN_EXPECT_WRITE_BARRIER(mutex_lock(&mutex), false); + KCSAN_EXPECT_WRITE_BARRIER(mutex_unlock(&mutex), true); + + KCSAN_EXPECT_RW_BARRIER(mb(), true); + KCSAN_EXPECT_RW_BARRIER(wmb(), true); + KCSAN_EXPECT_RW_BARRIER(rmb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_wmb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_rmb(), true); + KCSAN_EXPECT_RW_BARRIER(dma_wmb(), true); + KCSAN_EXPECT_RW_BARRIER(dma_rmb(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb__before_atomic(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb__after_atomic(), true); + KCSAN_EXPECT_RW_BARRIER(smp_mb__after_spinlock(), true); + KCSAN_EXPECT_RW_BARRIER(smp_store_mb(test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(smp_load_acquire(&test_var), false); + KCSAN_EXPECT_RW_BARRIER(smp_store_release(&test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(xchg(&test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(xchg_release(&test_var, 0), true); + KCSAN_EXPECT_RW_BARRIER(xchg_relaxed(&test_var, 0), false); + KCSAN_EXPECT_RW_BARRIER(cmpxchg(&test_var, 0, 0), true); + KCSAN_EXPECT_RW_BARRIER(cmpxchg_release(&test_var, 0, 0), true); + KCSAN_EXPECT_RW_BARRIER(cmpxchg_relaxed(&test_var, 0, 0), false); + KCSAN_EXPECT_RW_BARRIER(atomic_read(&dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_read_acquire(&dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_set(&dummy, 0), false); + KCSAN_EXPECT_RW_BARRIER(atomic_set_release(&dummy, 0), true); + KCSAN_EXPECT_RW_BARRIER(atomic_add(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return_acquire(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return_release(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_add_return_relaxed(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add_acquire(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add_release(1, &dummy), true); + KCSAN_EXPECT_RW_BARRIER(atomic_fetch_add_relaxed(1, &dummy), false); + KCSAN_EXPECT_RW_BARRIER(test_and_set_bit(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(test_and_clear_bit(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(test_and_change_bit(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(__clear_bit_unlock(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var), true); + KCSAN_EXPECT_RW_BARRIER(arch_spin_lock(&arch_spinlock), false); + KCSAN_EXPECT_RW_BARRIER(arch_spin_unlock(&arch_spinlock), true); + KCSAN_EXPECT_RW_BARRIER(spin_lock(&spinlock), false); + KCSAN_EXPECT_RW_BARRIER(spin_unlock(&spinlock), true); + KCSAN_EXPECT_RW_BARRIER(mutex_lock(&mutex), false); + KCSAN_EXPECT_RW_BARRIER(mutex_unlock(&mutex), true); + + kcsan_nestable_atomic_end(); +} + /* Simple test with normal data race. 
*/ __no_kcsan static void test_basic(struct kunit *test) @@ -1039,6 +1263,90 @@ static void test_1bit_value_change(struct kunit *test) KUNIT_EXPECT_TRUE(test, match); } +__no_kcsan +static void test_correct_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_with_memorder, test_kernel_with_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + KUNIT_EXPECT_FALSE(test, match_expect); +} + +__no_kcsan +static void test_missing_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_wrong_memorder, test_kernel_wrong_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + if (IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY)) + KUNIT_EXPECT_TRUE(test, match_expect); + else + KUNIT_EXPECT_FALSE(test, match_expect); +} + +__no_kcsan +static void test_atomic_builtins_correct_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_atomic_builtin_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_atomic_builtin_with_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_atomic_builtin_with_memorder, + test_kernel_atomic_builtin_with_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + KUNIT_EXPECT_FALSE(test, match_expect); +} + +__no_kcsan +static void test_atomic_builtins_missing_barrier(struct kunit *test) +{ + struct expect_report expect = { + .access = { + { test_kernel_atomic_builtin_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(KCSAN_ACCESS_WRITE) }, + { test_kernel_atomic_builtin_wrong_memorder, &test_var, sizeof(test_var), __KCSAN_ACCESS_RW(0) }, + }, + }; + bool match_expect = false; + + test_struct.val[0] = 0; /* init unlocked */ + begin_test_checks(test_kernel_atomic_builtin_wrong_memorder, + test_kernel_atomic_builtin_wrong_memorder); + do { + match_expect = report_matches_any_reordered(&expect); + } while (!end_test_checks(match_expect)); + if (IS_ENABLED(CONFIG_KCSAN_WEAK_MEMORY)) + KUNIT_EXPECT_TRUE(test, match_expect); + else + KUNIT_EXPECT_FALSE(test, match_expect); +} + /* * Generate thread counts for all test cases. Values generated are in interval * [2, 5] followed by exponentially increasing thread counts from 8 to 32. 
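For orientation, here is my own sketch (not part of the patch) of a generator consistent with the comment above, yielding 2, 3, 4, 5, 8, 16, 32; the real nthreads_gen_params() in kcsan_test.c additionally accounts for configuration limits when formatting the parameter description:

	/* Illustrative KUnit parameter generator; stops by returning NULL. */
	static const void *nthreads_gen_params_sketch(const void *prev, char *desc)
	{
		long nthreads = (long)prev;

		if (nthreads < 0 || nthreads >= 32)
			nthreads = 0;		/* no more parameters */
		else if (!nthreads)
			nthreads = 2;		/* first value of interval [2, 5] */
		else if (nthreads < 5)
			nthreads++;
		else if (nthreads == 5)
			nthreads = 8;		/* switch to exponential: 8, 16, 32 */
		else
			nthreads *= 2;

		snprintf(desc, KUNIT_PARAM_DESC_SIZE, "threads=%ld", nthreads);
		return (const void *)nthreads;
	}
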
@@ -1088,6 +1396,7 @@ static const void *nthreads_gen_params(const void *prev, char *desc) #define KCSAN_KUNIT_CASE(test_name) KUNIT_CASE_PARAM(test_name, nthreads_gen_params) static struct kunit_case kcsan_test_cases[] = { + KUNIT_CASE(test_barrier_nothreads), KCSAN_KUNIT_CASE(test_basic), KCSAN_KUNIT_CASE(test_concurrent_races), KCSAN_KUNIT_CASE(test_novalue_change), @@ -1112,6 +1421,10 @@ static struct kunit_case kcsan_test_cases[] = { KCSAN_KUNIT_CASE(test_seqlock_noreport), KCSAN_KUNIT_CASE(test_atomic_builtins), KCSAN_KUNIT_CASE(test_1bit_value_change), + KCSAN_KUNIT_CASE(test_correct_barrier), + KCSAN_KUNIT_CASE(test_missing_barrier), + KCSAN_KUNIT_CASE(test_atomic_builtins_correct_barrier), + KCSAN_KUNIT_CASE(test_atomic_builtins_missing_barrier), {}, }; @@ -1176,6 +1489,9 @@ static int test_init(struct kunit *test) observed.nlines = 0; spin_unlock_irqrestore(&observed.lock, flags); + if (strstr(test->name, "nothreads")) + return 0; + if (!torture_init_begin((char *)test->name, 1)) return -EBUSY; @@ -1218,6 +1534,9 @@ static void test_exit(struct kunit *test) struct task_struct **stop_thread; int i; + if (strstr(test->name, "nothreads")) + return; + if (torture_cleanup_begin()) return; From patchwork Tue Nov 30 11:44:20 2021
Date: Tue, 30 Nov 2021 12:44:20 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-13-elver@google.com> Subject: [PATCH v3 12/25] kcsan: Ignore GCC 11+ warnings about TSan runtime support From: Marco Elver GCC 11 has introduced a new warning option, -Wtsan [1], to warn about unsupported operations in the TSan runtime. But KCSAN != TSan runtime, so none of the warnings apply. [1] https://gcc.gnu.org/onlinedocs/gcc-11.1.0/gcc/Warning-Options.html Ignore the warnings. Currently the warning only fires in the test for __atomic_thread_fence(): kernel/kcsan/kcsan_test.c: In function ‘test_atomic_builtins’: kernel/kcsan/kcsan_test.c:1234:17: warning: ‘atomic_thread_fence’ is not supported with ‘-fsanitize=thread’ [-Wtsan] 1234 | __atomic_thread_fence(__ATOMIC_SEQ_CST); | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ which exists to ensure the KCSAN runtime keeps supporting the builtin instrumentation.
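For reference, the warning is straightforward to reproduce outside the kernel; a minimal, hypothetical sketch (assuming a gcc-11 toolchain is available):

	/* fence.c -- illustrative only; build with: gcc-11 -fsanitize=thread -c fence.c */
	void fence(void)
	{
		__atomic_thread_fence(__ATOMIC_SEQ_CST);	/* emits the -Wtsan warning quoted above */
	}

-Wno-tsan only suppresses this diagnostic class; the builtin itself keeps working under KCSAN's own runtime, which is what the test exercises.
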
Signed-off-by: Marco Elver --- scripts/Makefile.kcsan | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/scripts/Makefile.kcsan b/scripts/Makefile.kcsan index 4c7f0d282e42..19f693b68a96 100644 --- a/scripts/Makefile.kcsan +++ b/scripts/Makefile.kcsan @@ -13,6 +13,12 @@ kcsan-cflags := -fsanitize=thread -fno-optimize-sibling-calls \ $(call cc-option,$(call cc-param,tsan-compound-read-before-write=1),$(call cc-option,$(call cc-param,tsan-instrument-read-before-write=1))) \ $(call cc-param,tsan-distinguish-volatile=1) +ifdef CONFIG_CC_IS_GCC +# GCC started warning about operations unsupported by the TSan runtime. But +# KCSAN != TSan, so just ignore these warnings. +kcsan-cflags += -Wno-tsan +endif + ifndef CONFIG_KCSAN_WEAK_MEMORY kcsan-cflags += $(call cc-option,$(call cc-param,tsan-instrument-func-entry-exit=0)) endif
From patchwork Tue Nov 30 11:44:21 2021 Date: Tue, 30 Nov 2021 12:44:21 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-14-elver@google.com> Subject: [PATCH v3 13/25] kcsan: selftest: Add test case to check memory barrier instrumentation From: Marco Elver Memory barrier instrumentation is crucial to avoid false positives. To avoid surprises, run a simple test case in the boot-time selftest to ensure memory barriers are still instrumented correctly. Signed-off-by: Marco Elver --- kernel/kcsan/Makefile | 2 + kernel/kcsan/selftest.c | 141 ++++++++++++++++++++++++++++++++++++++++ 2 files changed, 143 insertions(+) diff --git a/kernel/kcsan/Makefile b/kernel/kcsan/Makefile index c2bb07f5bcc7..ff47e896de3b 100644 --- a/kernel/kcsan/Makefile +++ b/kernel/kcsan/Makefile @@ -11,6 +11,8 @@ CFLAGS_core.o := $(call cc-option,-fno-conserve-stack) \ -fno-stack-protector -DDISABLE_BRANCH_PROFILING obj-y := core.o debugfs.o report.o + +KCSAN_INSTRUMENT_BARRIERS_selftest.o := y obj-$(CONFIG_KCSAN_SELFTEST) += selftest.o CFLAGS_kcsan_test.o := $(CFLAGS_KCSAN) -g -fno-omit-frame-pointer diff --git a/kernel/kcsan/selftest.c b/kernel/kcsan/selftest.c index b4295a3892b7..08c6b84b9ebe 100644 --- a/kernel/kcsan/selftest.c +++ b/kernel/kcsan/selftest.c @@ -7,10 +7,15 @@ #define pr_fmt(fmt) "kcsan: " fmt +#include +#include #include +#include #include #include #include +#include +#include #include #include "encoding.h" @@ -103,6 +108,141 @@ static bool __init test_matching_access(void) return true; } +/* + * Correct memory barrier instrumentation is critical to avoiding false + * positives: simple test to check at boot certain barriers are always properly + * instrumented. See kcsan_test for a more complete test.
+ */ +static bool __init test_barrier(void) +{ +#ifdef CONFIG_KCSAN_WEAK_MEMORY + struct kcsan_scoped_access *reorder_access = &current->kcsan_ctx.reorder_access; +#else + struct kcsan_scoped_access *reorder_access = NULL; +#endif + bool ret = true; + arch_spinlock_t arch_spinlock = __ARCH_SPIN_LOCK_UNLOCKED; + DEFINE_SPINLOCK(spinlock); + atomic_t dummy; + long test_var; + + if (!reorder_access || !IS_ENABLED(CONFIG_SMP)) + return true; + +#define __KCSAN_CHECK_BARRIER(access_type, barrier, name) \ + do { \ + reorder_access->type = (access_type) | KCSAN_ACCESS_SCOPED; \ + reorder_access->size = 1; \ + barrier; \ + if (reorder_access->size != 0) { \ + pr_err("improperly instrumented type=(" #access_type "): " name "\n"); \ + ret = false; \ + } \ + } while (0) +#define KCSAN_CHECK_READ_BARRIER(b) __KCSAN_CHECK_BARRIER(0, b, #b) +#define KCSAN_CHECK_WRITE_BARRIER(b) __KCSAN_CHECK_BARRIER(KCSAN_ACCESS_WRITE, b, #b) +#define KCSAN_CHECK_RW_BARRIER(b) __KCSAN_CHECK_BARRIER(KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND, b, #b) + + kcsan_nestable_atomic_begin(); /* No watchpoints in called functions. */ + + KCSAN_CHECK_READ_BARRIER(mb()); + KCSAN_CHECK_READ_BARRIER(rmb()); + KCSAN_CHECK_READ_BARRIER(smp_mb()); + KCSAN_CHECK_READ_BARRIER(smp_rmb()); + KCSAN_CHECK_READ_BARRIER(dma_rmb()); + KCSAN_CHECK_READ_BARRIER(smp_mb__before_atomic()); + KCSAN_CHECK_READ_BARRIER(smp_mb__after_atomic()); + KCSAN_CHECK_READ_BARRIER(smp_mb__after_spinlock()); + KCSAN_CHECK_READ_BARRIER(smp_store_mb(test_var, 0)); + KCSAN_CHECK_READ_BARRIER(smp_store_release(&test_var, 0)); + KCSAN_CHECK_READ_BARRIER(xchg(&test_var, 0)); + KCSAN_CHECK_READ_BARRIER(xchg_release(&test_var, 0)); + KCSAN_CHECK_READ_BARRIER(cmpxchg(&test_var, 0, 0)); + KCSAN_CHECK_READ_BARRIER(cmpxchg_release(&test_var, 0, 0)); + KCSAN_CHECK_READ_BARRIER(atomic_set_release(&dummy, 0)); + KCSAN_CHECK_READ_BARRIER(atomic_add_return(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(atomic_add_return_release(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(atomic_fetch_add(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(atomic_fetch_add_release(1, &dummy)); + KCSAN_CHECK_READ_BARRIER(test_and_set_bit(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(test_and_clear_bit(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(test_and_change_bit(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(__clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_READ_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var)); + arch_spin_lock(&arch_spinlock); + KCSAN_CHECK_READ_BARRIER(arch_spin_unlock(&arch_spinlock)); + spin_lock(&spinlock); + KCSAN_CHECK_READ_BARRIER(spin_unlock(&spinlock)); + + KCSAN_CHECK_WRITE_BARRIER(mb()); + KCSAN_CHECK_WRITE_BARRIER(wmb()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb()); + KCSAN_CHECK_WRITE_BARRIER(smp_wmb()); + KCSAN_CHECK_WRITE_BARRIER(dma_wmb()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb__before_atomic()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb__after_atomic()); + KCSAN_CHECK_WRITE_BARRIER(smp_mb__after_spinlock()); + KCSAN_CHECK_WRITE_BARRIER(smp_store_mb(test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(smp_store_release(&test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(xchg(&test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(xchg_release(&test_var, 0)); + KCSAN_CHECK_WRITE_BARRIER(cmpxchg(&test_var, 0, 0)); + KCSAN_CHECK_WRITE_BARRIER(cmpxchg_release(&test_var, 0, 0)); + KCSAN_CHECK_WRITE_BARRIER(atomic_set_release(&dummy, 0)); + KCSAN_CHECK_WRITE_BARRIER(atomic_add_return(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(atomic_add_return_release(1, &dummy)); +
KCSAN_CHECK_WRITE_BARRIER(atomic_fetch_add(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(atomic_fetch_add_release(1, &dummy)); + KCSAN_CHECK_WRITE_BARRIER(test_and_set_bit(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(test_and_clear_bit(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(test_and_change_bit(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(__clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_WRITE_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var)); + arch_spin_lock(&arch_spinlock); + KCSAN_CHECK_WRITE_BARRIER(arch_spin_unlock(&arch_spinlock)); + spin_lock(&spinlock); + KCSAN_CHECK_WRITE_BARRIER(spin_unlock(&spinlock)); + + KCSAN_CHECK_RW_BARRIER(mb()); + KCSAN_CHECK_RW_BARRIER(wmb()); + KCSAN_CHECK_RW_BARRIER(rmb()); + KCSAN_CHECK_RW_BARRIER(smp_mb()); + KCSAN_CHECK_RW_BARRIER(smp_wmb()); + KCSAN_CHECK_RW_BARRIER(smp_rmb()); + KCSAN_CHECK_RW_BARRIER(dma_wmb()); + KCSAN_CHECK_RW_BARRIER(dma_rmb()); + KCSAN_CHECK_RW_BARRIER(smp_mb__before_atomic()); + KCSAN_CHECK_RW_BARRIER(smp_mb__after_atomic()); + KCSAN_CHECK_RW_BARRIER(smp_mb__after_spinlock()); + KCSAN_CHECK_RW_BARRIER(smp_store_mb(test_var, 0)); + KCSAN_CHECK_RW_BARRIER(smp_store_release(&test_var, 0)); + KCSAN_CHECK_RW_BARRIER(xchg(&test_var, 0)); + KCSAN_CHECK_RW_BARRIER(xchg_release(&test_var, 0)); + KCSAN_CHECK_RW_BARRIER(cmpxchg(&test_var, 0, 0)); + KCSAN_CHECK_RW_BARRIER(cmpxchg_release(&test_var, 0, 0)); + KCSAN_CHECK_RW_BARRIER(atomic_set_release(&dummy, 0)); + KCSAN_CHECK_RW_BARRIER(atomic_add_return(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(atomic_add_return_release(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(atomic_fetch_add(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(atomic_fetch_add_release(1, &dummy)); + KCSAN_CHECK_RW_BARRIER(test_and_set_bit(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(test_and_clear_bit(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(test_and_change_bit(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(__clear_bit_unlock(0, &test_var)); + KCSAN_CHECK_RW_BARRIER(clear_bit_unlock_is_negative_byte(0, &test_var)); + arch_spin_lock(&arch_spinlock); + KCSAN_CHECK_RW_BARRIER(arch_spin_unlock(&arch_spinlock)); + spin_lock(&spinlock); + KCSAN_CHECK_RW_BARRIER(spin_unlock(&spinlock)); + + kcsan_nestable_atomic_end(); + + return ret; +} + static int __init kcsan_selftest(void) { int passed = 0; @@ -120,6 +260,7 @@ static int __init kcsan_selftest(void) RUN_TEST(test_requires); RUN_TEST(test_encode_decode); RUN_TEST(test_matching_access); + RUN_TEST(test_barrier); pr_info("selftest: %d/%d tests passed\n", passed, total); if (passed != total)
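With RUN_TEST(test_barrier) added, a successful boot should report all selftests passing; given the four RUN_TEST() calls visible above and the "kcsan: " pr_fmt prefix, the log line would look roughly like (illustrative; the count depends on which selftests are compiled in):

	kcsan: selftest: 4/4 tests passed
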
From patchwork Tue Nov 30 11:44:22 2021 Date: Tue, 30 Nov 2021 12:44:22 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-15-elver@google.com> Subject: [PATCH v3 14/25] locking/barriers, kcsan: Add instrumentation for barriers From: Marco Elver
McKenney" Cc: Alexander Potapenko , Boqun Feng , Borislav Petkov , Dmitry Vyukov , Ingo Molnar , Mark Rutland , Peter Zijlstra , Thomas Gleixner , Waiman Long , Will Deacon , kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org X-Rspamd-Server: rspam05 X-Rspamd-Queue-Id: 4FBEC90000AB X-Stat-Signature: eku397w6smxxiws5zogxhf1kb6zsjnsy Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=google.com header.s=20210112 header.b=sAns+f6z; dmarc=pass (policy=reject) header.from=google.com; spf=pass (imf28.hostedemail.com: domain of 34Q6mYQUKCKoOVfObQYYQVO.MYWVSXeh-WWUfKMU.YbQ@flex--elver.bounces.google.com designates 209.85.128.73 as permitted sender) smtp.mailfrom=34Q6mYQUKCKoOVfObQYYQVO.MYWVSXeh-WWUfKMU.YbQ@flex--elver.bounces.google.com X-HE-Tag: 1638272738-554752 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: Adds the required KCSAN instrumentation for barriers if CONFIG_SMP. KCSAN supports modeling the effects of: smp_mb() smp_rmb() smp_wmb() smp_store_release() Signed-off-by: Marco Elver --- include/asm-generic/barrier.h | 29 +++++++++++++++-------------- include/linux/spinlock.h | 2 +- 2 files changed, 16 insertions(+), 15 deletions(-) diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 640f09479bdf..27a9c9edfef6 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -14,6 +14,7 @@ #ifndef __ASSEMBLY__ #include +#include #include #ifndef nop @@ -62,15 +63,15 @@ #ifdef CONFIG_SMP #ifndef smp_mb -#define smp_mb() __smp_mb() +#define smp_mb() do { kcsan_mb(); __smp_mb(); } while (0) #endif #ifndef smp_rmb -#define smp_rmb() __smp_rmb() +#define smp_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0) #endif #ifndef smp_wmb -#define smp_wmb() __smp_wmb() +#define smp_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0) #endif #else /* !CONFIG_SMP */ @@ -123,19 +124,19 @@ do { \ #ifdef CONFIG_SMP #ifndef smp_store_mb -#define smp_store_mb(var, value) __smp_store_mb(var, value) +#define smp_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0) #endif #ifndef smp_mb__before_atomic -#define smp_mb__before_atomic() __smp_mb__before_atomic() +#define smp_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0) #endif #ifndef smp_mb__after_atomic -#define smp_mb__after_atomic() __smp_mb__after_atomic() +#define smp_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0) #endif #ifndef smp_store_release -#define smp_store_release(p, v) __smp_store_release(p, v) +#define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0) #endif #ifndef smp_load_acquire @@ -178,13 +179,13 @@ do { \ #endif /* CONFIG_SMP */ /* Barriers for virtual machine guests when talking to an SMP host */ -#define virt_mb() __smp_mb() -#define virt_rmb() __smp_rmb() -#define virt_wmb() __smp_wmb() -#define virt_store_mb(var, value) __smp_store_mb(var, value) -#define virt_mb__before_atomic() __smp_mb__before_atomic() -#define virt_mb__after_atomic() __smp_mb__after_atomic() -#define virt_store_release(p, v) __smp_store_release(p, v) +#define virt_mb() do { kcsan_mb(); __smp_mb(); } while (0) +#define virt_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0) +#define virt_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0) +#define 
Signed-off-by: Marco Elver --- include/asm-generic/barrier.h | 29 +++++++++++++++-------------- include/linux/spinlock.h | 2 +- 2 files changed, 16 insertions(+), 15 deletions(-) diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 640f09479bdf..27a9c9edfef6 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -14,6 +14,7 @@ #ifndef __ASSEMBLY__ #include +#include #include #ifndef nop @@ -62,15 +63,15 @@ #ifdef CONFIG_SMP #ifndef smp_mb -#define smp_mb() __smp_mb() +#define smp_mb() do { kcsan_mb(); __smp_mb(); } while (0) #endif #ifndef smp_rmb -#define smp_rmb() __smp_rmb() +#define smp_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0) #endif #ifndef smp_wmb -#define smp_wmb() __smp_wmb() +#define smp_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0) #endif #else /* !CONFIG_SMP */ @@ -123,19 +124,19 @@ do { \ #ifdef CONFIG_SMP #ifndef smp_store_mb -#define smp_store_mb(var, value) __smp_store_mb(var, value) +#define smp_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0) #endif #ifndef smp_mb__before_atomic -#define smp_mb__before_atomic() __smp_mb__before_atomic() +#define smp_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0) #endif #ifndef smp_mb__after_atomic -#define smp_mb__after_atomic() __smp_mb__after_atomic() +#define smp_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0) #endif #ifndef smp_store_release -#define smp_store_release(p, v) __smp_store_release(p, v) +#define smp_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0) #endif #ifndef smp_load_acquire @@ -178,13 +179,13 @@ do { \ #endif /* CONFIG_SMP */ /* Barriers for virtual machine guests when talking to an SMP host */ -#define virt_mb() __smp_mb() -#define virt_rmb() __smp_rmb() -#define virt_wmb() __smp_wmb() -#define virt_store_mb(var, value) __smp_store_mb(var, value) -#define virt_mb__before_atomic() __smp_mb__before_atomic() -#define virt_mb__after_atomic() __smp_mb__after_atomic() -#define virt_store_release(p, v) __smp_store_release(p, v) +#define virt_mb() do { kcsan_mb(); __smp_mb(); } while (0) +#define virt_rmb() do { kcsan_rmb(); __smp_rmb(); } while (0) +#define virt_wmb() do { kcsan_wmb(); __smp_wmb(); } while (0) +#define virt_store_mb(var, value) do { kcsan_mb(); __smp_store_mb(var, value); } while (0) +#define virt_mb__before_atomic() do { kcsan_mb(); __smp_mb__before_atomic(); } while (0) +#define virt_mb__after_atomic() do { kcsan_mb(); __smp_mb__after_atomic(); } while (0) +#define virt_store_release(p, v) do { kcsan_release(); __smp_store_release(p, v); } while (0) #define virt_load_acquire(p) __smp_load_acquire(p) /** diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h index b4e5ca23f840..5c0c5174155d 100644 --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -171,7 +171,7 @@ do { \ * Architectures that can implement ACQUIRE better need to take care. */ #ifndef smp_mb__after_spinlock -#define smp_mb__after_spinlock() do { } while (0) +#define smp_mb__after_spinlock() kcsan_mb() #endif #ifdef CONFIG_DEBUG_SPINLOCK From patchwork Tue Nov 30 11:44:23 2021
Date: Tue, 30 Nov 2021 12:44:23 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-16-elver@google.com> Subject: [PATCH v3 15/25] locking/barriers, kcsan: Support generic instrumentation From: Marco Elver Thus far only smp_*() barriers had been defined by asm-generic/barrier.h based on __smp_*() barriers, because the !SMP case is usually generic. With the introduction of instrumentation, it also makes sense to have asm-generic/barrier.h assist in the definition of instrumented versions of mb(), rmb(), wmb(), dma_rmb(), and dma_wmb(). Because there is no requirement to distinguish the !SMP case, the definition can be simpler: we can avoid also providing fallbacks for the __ prefixed cases, and only check if `defined(__<barrier>)`, to finally define the KCSAN-instrumented versions. This also allows for the compiler to complain if an architecture accidentally defines both the normal and __ prefixed variant.
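As a concrete illustration (a hypothetical architecture, not from the patch): an arch opting in defines only the __ prefixed primitive in its asm/barrier.h, and asm-generic/barrier.h layers the KCSAN call on top:

	/* arch/foo/include/asm/barrier.h -- hypothetical fragment: */
	#define __mb()	asm volatile("fence" ::: "memory")	/* illustrative instruction */

	/* asm-generic/barrier.h (below) then provides: */
	#define mb()	do { kcsan_mb(); __mb(); } while (0)

An arch that additionally defined mb() itself would now trigger a macro redefinition warning, which is exactly the double-definition complaint the commit message mentions.
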
Signed-off-by: Marco Elver --- include/asm-generic/barrier.h | 25 +++++++++++++++++++++++++ 1 file changed, 25 insertions(+) diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 27a9c9edfef6..02c4339c8eeb 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -21,6 +21,31 @@ #define nop() asm volatile ("nop") #endif +/* + * Architectures that want generic instrumentation can define __ prefixed + * variants of all barriers. + */ + +#ifdef __mb +#define mb() do { kcsan_mb(); __mb(); } while (0) +#endif + +#ifdef __rmb +#define rmb() do { kcsan_rmb(); __rmb(); } while (0) +#endif + +#ifdef __wmb +#define wmb() do { kcsan_wmb(); __wmb(); } while (0) +#endif + +#ifdef __dma_rmb +#define dma_rmb() do { kcsan_rmb(); __dma_rmb(); } while (0) +#endif + +#ifdef __dma_wmb +#define dma_wmb() do { kcsan_wmb(); __dma_wmb(); } while (0) +#endif + /* * Force strict CPU ordering. And yes, this is required on UP too when we're * talking to devices. From patchwork Tue Nov 30 11:44:24 2021
Date: Tue, 30 Nov 2021 12:44:24 +0100 In-Reply-To: <20211130114433.2580590-1-elver@google.com> Message-Id: <20211130114433.2580590-17-elver@google.com> Subject: [PATCH v3 16/25] locking/atomics, kcsan: Add instrumentation for barriers From: Marco Elver Adds the required KCSAN instrumentation for barriers of atomics. Signed-off-by: Marco Elver --- include/linux/atomic/atomic-instrumented.h | 135 ++++++++++++++++++++- scripts/atomic/gen-atomic-instrumented.sh | 41 +++++-- 2 files changed, 166 insertions(+), 10 deletions(-) diff --git a/include/linux/atomic/atomic-instrumented.h b/include/linux/atomic/atomic-instrumented.h index a0f654370da3..5d69b143c28e 100644 --- a/include/linux/atomic/atomic-instrumented.h +++ b/include/linux/atomic/atomic-instrumented.h @@ -45,6 +45,7 @@ atomic_set(atomic_t *v, int i) static __always_inline void atomic_set_release(atomic_t *v, int i) { + kcsan_release(); instrument_atomic_write(v, sizeof(*v)); arch_atomic_set_release(v, i); } @@ -59,6 +60,7 @@ atomic_add(int i, atomic_t *v) static __always_inline int atomic_add_return(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return(i, v); } @@ -73,6 +75,7 @@ atomic_add_return_acquire(int i, atomic_t *v) static __always_inline int atomic_add_return_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_return_release(i, v); } @@ -87,6 +90,7 @@ atomic_add_return_relaxed(int i, atomic_t *v) static __always_inline int atomic_fetch_add(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add(i, v); } @@ -101,6 +105,7 @@ atomic_fetch_add_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_add_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_release(i, v); } @@ -122,6 +127,7 @@ atomic_sub(int i, atomic_t *v) static __always_inline int atomic_sub_return(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return(i,
v); } @@ -136,6 +142,7 @@ atomic_sub_return_acquire(int i, atomic_t *v) static __always_inline int atomic_sub_return_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_return_release(i, v); } @@ -150,6 +157,7 @@ atomic_sub_return_relaxed(int i, atomic_t *v) static __always_inline int atomic_fetch_sub(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub(i, v); } @@ -164,6 +172,7 @@ atomic_fetch_sub_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_sub_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_sub_release(i, v); } @@ -185,6 +194,7 @@ atomic_inc(atomic_t *v) static __always_inline int atomic_inc_return(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return(v); } @@ -199,6 +209,7 @@ atomic_inc_return_acquire(atomic_t *v) static __always_inline int atomic_inc_return_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_return_release(v); } @@ -213,6 +224,7 @@ atomic_inc_return_relaxed(atomic_t *v) static __always_inline int atomic_fetch_inc(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc(v); } @@ -227,6 +239,7 @@ atomic_fetch_inc_acquire(atomic_t *v) static __always_inline int atomic_fetch_inc_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_inc_release(v); } @@ -248,6 +261,7 @@ atomic_dec(atomic_t *v) static __always_inline int atomic_dec_return(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_return(v); } @@ -262,6 +276,7 @@ atomic_dec_return_acquire(atomic_t *v) static __always_inline int atomic_dec_return_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_return_release(v); } @@ -276,6 +291,7 @@ atomic_dec_return_relaxed(atomic_t *v) static __always_inline int atomic_fetch_dec(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec(v); } @@ -290,6 +306,7 @@ atomic_fetch_dec_acquire(atomic_t *v) static __always_inline int atomic_fetch_dec_release(atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_dec_release(v); } @@ -311,6 +328,7 @@ atomic_and(int i, atomic_t *v) static __always_inline int atomic_fetch_and(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and(i, v); } @@ -325,6 +343,7 @@ atomic_fetch_and_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_and_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_and_release(i, v); } @@ -346,6 +365,7 @@ atomic_andnot(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot(i, v); } @@ -360,6 +380,7 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_andnot_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_andnot_release(i, v); } @@ -381,6 +402,7 @@ atomic_or(int i, atomic_t *v) static __always_inline int atomic_fetch_or(int i, atomic_t *v) { + 
kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or(i, v); }
@@ -395,6 +417,7 @@ atomic_fetch_or_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_or_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_or_release(i, v); }
@@ -416,6 +439,7 @@ atomic_xor(int i, atomic_t *v) static __always_inline int atomic_fetch_xor(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor(i, v); }
@@ -430,6 +454,7 @@ atomic_fetch_xor_acquire(int i, atomic_t *v) static __always_inline int atomic_fetch_xor_release(int i, atomic_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_xor_release(i, v); }
@@ -444,6 +469,7 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v) static __always_inline int atomic_xchg(atomic_t *v, int i) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_xchg(v, i); }
@@ -458,6 +484,7 @@ atomic_xchg_acquire(atomic_t *v, int i) static __always_inline int atomic_xchg_release(atomic_t *v, int i) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_xchg_release(v, i); }
@@ -472,6 +499,7 @@ atomic_xchg_relaxed(atomic_t *v, int i) static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg(v, old, new); }
@@ -486,6 +514,7 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new) static __always_inline int atomic_cmpxchg_release(atomic_t *v, int old, int new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_cmpxchg_release(v, old, new); }
@@ -500,6 +529,7 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new) static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg(v, old, new);
@@ -516,6 +546,7 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new) static __always_inline bool atomic_try_cmpxchg_release(atomic_t *v, int *old, int new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_try_cmpxchg_release(v, old, new);
@@ -532,6 +563,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new) static __always_inline bool atomic_sub_and_test(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_sub_and_test(i, v); }
@@ -539,6 +571,7 @@ atomic_sub_and_test(int i, atomic_t *v) static __always_inline bool atomic_dec_and_test(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_and_test(v); }
@@ -546,6 +579,7 @@ atomic_dec_and_test(atomic_t *v) static __always_inline bool atomic_inc_and_test(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_and_test(v); }
@@ -553,6 +587,7 @@ atomic_inc_and_test(atomic_t *v) static __always_inline bool atomic_add_negative(int i, atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_negative(i, v); }
@@ -560,6 +595,7 @@ atomic_add_negative(int i, atomic_t *v) static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_fetch_add_unless(v, a, u); }
@@ -567,6 +603,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u) static __always_inline bool atomic_add_unless(atomic_t *v, int a, int u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_add_unless(v, a, u); }
@@ -574,6 +611,7 @@ atomic_add_unless(atomic_t *v, int a, int u) static __always_inline bool atomic_inc_not_zero(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_not_zero(v); }
@@ -581,6 +619,7 @@ atomic_inc_not_zero(atomic_t *v) static __always_inline bool atomic_inc_unless_negative(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_inc_unless_negative(v); }
@@ -588,6 +627,7 @@ atomic_inc_unless_negative(atomic_t *v) static __always_inline bool atomic_dec_unless_positive(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_unless_positive(v); }
@@ -595,6 +635,7 @@ atomic_dec_unless_positive(atomic_t *v) static __always_inline int atomic_dec_if_positive(atomic_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_dec_if_positive(v); }
@@ -623,6 +664,7 @@ atomic64_set(atomic64_t *v, s64 i) static __always_inline void atomic64_set_release(atomic64_t *v, s64 i) { + kcsan_release(); instrument_atomic_write(v, sizeof(*v)); arch_atomic64_set_release(v, i); }
@@ -637,6 +679,7 @@ atomic64_add(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return(i, v); }
@@ -651,6 +694,7 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_add_return_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_return_release(i, v); }
@@ -665,6 +709,7 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add(i, v); }
@@ -679,6 +724,7 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_release(i, v); }
@@ -700,6 +746,7 @@ atomic64_sub(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return(i, v); }
@@ -714,6 +761,7 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_sub_return_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_return_release(i, v); }
@@ -728,6 +776,7 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_sub(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub(i, v); }
@@ -742,6 +791,7 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_sub_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_sub_release(i, v); }
@@ -763,6 +813,7 @@ atomic64_inc(atomic64_t *v) static __always_inline s64 atomic64_inc_return(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return(v); }
@@ -777,6 +828,7 @@ atomic64_inc_return_acquire(atomic64_t *v) static __always_inline s64 atomic64_inc_return_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_return_release(v); }
@@ -791,6 +843,7 @@ atomic64_inc_return_relaxed(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc(v); }
@@ -805,6 +858,7 @@ atomic64_fetch_inc_acquire(atomic64_t *v) static __always_inline s64 atomic64_fetch_inc_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_inc_release(v); }
@@ -826,6 +880,7 @@ atomic64_dec(atomic64_t *v) static __always_inline s64 atomic64_dec_return(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return(v); }
@@ -840,6 +895,7 @@ atomic64_dec_return_acquire(atomic64_t *v) static __always_inline s64 atomic64_dec_return_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_return_release(v); }
@@ -854,6 +910,7 @@ atomic64_dec_return_relaxed(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec(v); }
@@ -868,6 +925,7 @@ atomic64_fetch_dec_acquire(atomic64_t *v) static __always_inline s64 atomic64_fetch_dec_release(atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_dec_release(v); }
@@ -889,6 +947,7 @@ atomic64_and(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and(i, v); }
@@ -903,6 +962,7 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_and_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_and_release(i, v); }
@@ -924,6 +984,7 @@ atomic64_andnot(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot(i, v); }
@@ -938,6 +999,7 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_andnot_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_andnot_release(i, v); }
@@ -959,6 +1021,7 @@ atomic64_or(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_or(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or(i, v); }
@@ -973,6 +1036,7 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_or_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_or_release(i, v); }
@@ -994,6 +1058,7 @@ atomic64_xor(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor(i, v); }
@@ -1008,6 +1073,7 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_xor_release(s64 i, atomic64_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_xor_release(i, v); }
@@ -1022,6 +1088,7 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v) static __always_inline s64 atomic64_xchg(atomic64_t *v, s64 i) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg(v, i); }
@@ -1036,6 +1103,7 @@ atomic64_xchg_acquire(atomic64_t *v, s64 i) static __always_inline s64 atomic64_xchg_release(atomic64_t *v, s64 i) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_xchg_release(v, i); }
@@ -1050,6 +1118,7 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 i) static __always_inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg(v, old, new); }
@@ -1064,6 +1133,7 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new) static __always_inline s64 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_cmpxchg_release(v, old, new); }
@@ -1078,6 +1148,7 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new) static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg(v, old, new);
@@ -1094,6 +1165,7 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic64_try_cmpxchg_release(v, old, new);
@@ -1110,6 +1182,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new) static __always_inline bool atomic64_sub_and_test(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_sub_and_test(i, v); }
@@ -1117,6 +1190,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v) static __always_inline bool atomic64_dec_and_test(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_and_test(v); }
@@ -1124,6 +1198,7 @@ atomic64_dec_and_test(atomic64_t *v) static __always_inline bool atomic64_inc_and_test(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_and_test(v); }
@@ -1131,6 +1206,7 @@ atomic64_inc_and_test(atomic64_t *v) static __always_inline bool atomic64_add_negative(s64 i, atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_negative(i, v); }
@@ -1138,6 +1214,7 @@ atomic64_add_negative(s64 i, atomic64_t *v) static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_fetch_add_unless(v, a, u); }
@@ -1145,6 +1222,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u) static __always_inline bool atomic64_add_unless(atomic64_t *v, s64 a, s64 u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_add_unless(v, a, u); }
@@ -1152,6 +1230,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u) static __always_inline bool atomic64_inc_not_zero(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_not_zero(v); }
@@ -1159,6 +1238,7 @@ atomic64_inc_not_zero(atomic64_t *v) static __always_inline bool atomic64_inc_unless_negative(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_inc_unless_negative(v); }
@@ -1166,6 +1246,7 @@ atomic64_inc_unless_negative(atomic64_t *v) static __always_inline bool atomic64_dec_unless_positive(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_unless_positive(v); }
@@ -1173,6 +1254,7 @@ atomic64_dec_unless_positive(atomic64_t *v) static __always_inline s64 atomic64_dec_if_positive(atomic64_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic64_dec_if_positive(v); }
@@ -1201,6 +1283,7 @@ atomic_long_set(atomic_long_t *v, long i) static __always_inline void atomic_long_set_release(atomic_long_t *v, long i) { + kcsan_release(); instrument_atomic_write(v, sizeof(*v)); arch_atomic_long_set_release(v, i); }
@@ -1215,6 +1298,7 @@ atomic_long_add(long i, atomic_long_t *v) static __always_inline long atomic_long_add_return(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_return(i, v); }
@@ -1229,6 +1313,7 @@ atomic_long_add_return_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_add_return_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_return_release(i, v); }
@@ -1243,6 +1328,7 @@ atomic_long_add_return_relaxed(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_add(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_add(i, v); }
@@ -1257,6 +1343,7 @@ atomic_long_fetch_add_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_add_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_add_release(i, v); }
@@ -1278,6 +1365,7 @@ atomic_long_sub(long i, atomic_long_t *v) static __always_inline long atomic_long_sub_return(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_sub_return(i, v); }
@@ -1292,6 +1380,7 @@ atomic_long_sub_return_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_sub_return_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_sub_return_release(i, v); }
@@ -1306,6 +1395,7 @@ atomic_long_sub_return_relaxed(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_sub(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_sub(i, v); }
@@ -1320,6 +1410,7 @@ atomic_long_fetch_sub_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_sub_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_sub_release(i, v); }
@@ -1341,6 +1432,7 @@ atomic_long_inc(atomic_long_t *v) static __always_inline long atomic_long_inc_return(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_return(v); }
@@ -1355,6 +1447,7 @@ atomic_long_inc_return_acquire(atomic_long_t *v) static __always_inline long atomic_long_inc_return_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_return_release(v); }
@@ -1369,6 +1462,7 @@ atomic_long_inc_return_relaxed(atomic_long_t *v) static __always_inline long atomic_long_fetch_inc(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_inc(v); }
@@ -1383,6 +1477,7 @@ atomic_long_fetch_inc_acquire(atomic_long_t *v) static __always_inline long atomic_long_fetch_inc_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_inc_release(v); }
@@ -1404,6 +1499,7 @@ atomic_long_dec(atomic_long_t *v) static __always_inline long atomic_long_dec_return(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_return(v); }
@@ -1418,6 +1514,7 @@ atomic_long_dec_return_acquire(atomic_long_t *v) static __always_inline long atomic_long_dec_return_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_return_release(v); }
@@ -1432,6 +1529,7 @@ atomic_long_dec_return_relaxed(atomic_long_t *v) static __always_inline long atomic_long_fetch_dec(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_dec(v); }
@@ -1446,6 +1544,7 @@ atomic_long_fetch_dec_acquire(atomic_long_t *v) static __always_inline long atomic_long_fetch_dec_release(atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_dec_release(v); }
@@ -1467,6 +1566,7 @@ atomic_long_and(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_and(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_and(i, v); }
@@ -1481,6 +1581,7 @@ atomic_long_fetch_and_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_and_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_and_release(i, v); }
@@ -1502,6 +1603,7 @@ atomic_long_andnot(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_andnot(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_andnot(i, v); }
@@ -1516,6 +1618,7 @@ atomic_long_fetch_andnot_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_andnot_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_andnot_release(i, v); }
@@ -1537,6 +1640,7 @@ atomic_long_or(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_or(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_or(i, v); }
@@ -1551,6 +1655,7 @@ atomic_long_fetch_or_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_or_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_or_release(i, v); }
@@ -1572,6 +1677,7 @@ atomic_long_xor(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_xor(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_xor(i, v); }
@@ -1586,6 +1692,7 @@ atomic_long_fetch_xor_acquire(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_xor_release(long i, atomic_long_t *v) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_xor_release(i, v); }
@@ -1600,6 +1707,7 @@ atomic_long_fetch_xor_relaxed(long i, atomic_long_t *v) static __always_inline long atomic_long_xchg(atomic_long_t *v, long i) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_xchg(v, i); }
@@ -1614,6 +1722,7 @@ atomic_long_xchg_acquire(atomic_long_t *v, long i) static __always_inline long atomic_long_xchg_release(atomic_long_t *v, long i) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_xchg_release(v, i); }
@@ -1628,6 +1737,7 @@ atomic_long_xchg_relaxed(atomic_long_t *v, long i) static __always_inline long atomic_long_cmpxchg(atomic_long_t *v, long old, long new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_cmpxchg(v, old, new); }
@@ -1642,6 +1752,7 @@ atomic_long_cmpxchg_acquire(atomic_long_t *v, long old, long new) static __always_inline long atomic_long_cmpxchg_release(atomic_long_t *v, long old, long new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_cmpxchg_release(v, old, new); }
@@ -1656,6 +1767,7 @@ atomic_long_cmpxchg_relaxed(atomic_long_t *v, long old, long new) static __always_inline bool atomic_long_try_cmpxchg(atomic_long_t *v, long *old, long new) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_long_try_cmpxchg(v, old, new);
@@ -1672,6 +1784,7 @@ atomic_long_try_cmpxchg_acquire(atomic_long_t *v, long *old, long new) static __always_inline bool atomic_long_try_cmpxchg_release(atomic_long_t *v, long *old, long new) { + kcsan_release(); instrument_atomic_read_write(v, sizeof(*v)); instrument_atomic_read_write(old, sizeof(*old)); return arch_atomic_long_try_cmpxchg_release(v, old, new);
@@ -1688,6 +1801,7 @@ atomic_long_try_cmpxchg_relaxed(atomic_long_t *v, long *old, long new) static __always_inline bool atomic_long_sub_and_test(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_sub_and_test(i, v); }
@@ -1695,6 +1809,7 @@ atomic_long_sub_and_test(long i, atomic_long_t *v) static __always_inline bool atomic_long_dec_and_test(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_and_test(v); }
@@ -1702,6 +1817,7 @@ atomic_long_dec_and_test(atomic_long_t *v) static __always_inline bool atomic_long_inc_and_test(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_and_test(v); }
@@ -1709,6 +1825,7 @@ atomic_long_inc_and_test(atomic_long_t *v) static __always_inline bool atomic_long_add_negative(long i, atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_negative(i, v); }
@@ -1716,6 +1833,7 @@ atomic_long_add_negative(long i, atomic_long_t *v) static __always_inline long atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_fetch_add_unless(v, a, u); }
@@ -1723,6 +1841,7 @@ atomic_long_fetch_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool atomic_long_add_unless(atomic_long_t *v, long a, long u) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_add_unless(v, a, u); }
@@ -1730,6 +1849,7 @@ atomic_long_add_unless(atomic_long_t *v, long a, long u) static __always_inline bool atomic_long_inc_not_zero(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_not_zero(v); }
@@ -1737,6 +1857,7 @@ atomic_long_inc_not_zero(atomic_long_t *v) static __always_inline bool atomic_long_inc_unless_negative(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_inc_unless_negative(v); }
@@ -1744,6 +1865,7 @@ atomic_long_inc_unless_negative(atomic_long_t *v) static __always_inline bool atomic_long_dec_unless_positive(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_unless_positive(v); }
@@ -1751,6 +1873,7 @@ atomic_long_dec_unless_positive(atomic_long_t *v) static __always_inline long atomic_long_dec_if_positive(atomic_long_t *v) { + kcsan_mb(); instrument_atomic_read_write(v, sizeof(*v)); return arch_atomic_long_dec_if_positive(v); }
@@ -1758,6 +1881,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define xchg(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_xchg(__ai_ptr, __VA_ARGS__); \ })
@@ -1772,6 +1896,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define xchg_release(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_xchg_release(__ai_ptr, __VA_ARGS__); \ })
@@ -1786,6 +1911,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg(__ai_ptr, __VA_ARGS__); \ })
@@ -1800,6 +1926,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg_release(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \ })
@@ -1814,6 +1941,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg64(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \ })
@@ -1828,6 +1956,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg64_release(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \ })
@@ -1843,6 +1972,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ typeof(oldp) __ai_oldp = (oldp); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
@@ -1861,6 +1991,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) ({ \ typeof(ptr) __ai_ptr = (ptr); \ typeof(oldp) __ai_oldp = (oldp); \ + kcsan_release(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \ arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
@@ -1892,6 +2023,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define sync_cmpxchg(ptr, ...) \ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \ arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__); \ })
@@ -1899,6 +2031,7 @@ atomic_long_dec_if_positive(atomic_long_t *v) #define cmpxchg_double(ptr, ...)
\ ({ \ typeof(ptr) __ai_ptr = (ptr); \ + kcsan_mb(); \ instrument_atomic_write(__ai_ptr, 2 * sizeof(*__ai_ptr)); \ arch_cmpxchg_double(__ai_ptr, __VA_ARGS__); \ })
@@ -1912,4 +2045,4 @@ atomic_long_dec_if_positive(atomic_long_t *v) }) #endif /* _LINUX_ATOMIC_INSTRUMENTED_H */ -// 2a9553f0a9d5619f19151092df5cabbbf16ce835 +// 87c974b93032afd42143613434d1a7788fa598f9

diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index 035ceb4ee85c..68f902731d01 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -34,6 +34,14 @@ gen_param_check()
 gen_params_checks()
 {
 	local meta="$1"; shift
+	local order="$1"; shift
+
+	if [ "${order}" = "_release" ]; then
+		printf "\tkcsan_release();\n"
+	elif [ -z "${order}" ] && ! meta_in "$meta" "slv"; then
+		# RMW with return value is fully ordered
+		printf "\tkcsan_mb();\n"
+	fi
 
 	while [ "$#" -gt 0 ]; do
 		gen_param_check "$meta" "$1"
@@ -56,7 +64,7 @@ gen_proto_order_variant()
 	local ret="$(gen_ret_type "${meta}" "${int}")"
 	local params="$(gen_params "${int}" "${atomic}" "$@")"
-	local checks="$(gen_params_checks "${meta}" "$@")"
+	local checks="$(gen_params_checks "${meta}" "${order}" "$@")"
 	local args="$(gen_args "$@")"
 	local retstmt="$(gen_ret_stmt "${meta}")"
@@ -75,29 +83,44 @@ EOF
 gen_xchg()
 {
 	local xchg="$1"; shift
+	local order="$1"; shift
 	local mult="$1"; shift
 
+	kcsan_barrier=""
+	if [ "${xchg%_local}" = "${xchg}" ]; then
+		case "$order" in
+		_release)	kcsan_barrier="kcsan_release()" ;;
+		"")		kcsan_barrier="kcsan_mb()" ;;
+		esac
+	fi
+
 	if [ "${xchg%${xchg#try_cmpxchg}}" = "try_cmpxchg" ] ; then
 
 	cat <
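The ordering rule the script now encodes mirrors the kernel's atomic conventions: value-returning RMW operations are fully ordered, _release variants are release-ordered, and acquire/relaxed as well as non-value-returning operations imply no barrier. As a hedged illustration only -- hand-written here, not actual generator output -- the three cases come out roughly as:

static __always_inline int
atomic_add_return(int i, atomic_t *v)
{
	kcsan_mb();	/* fully ordered: RMW with return value */
	instrument_atomic_read_write(v, sizeof(*v));
	return arch_atomic_add_return(i, v);
}

static __always_inline int
atomic_add_return_release(int i, atomic_t *v)
{
	kcsan_release();	/* _release variant */
	instrument_atomic_read_write(v, sizeof(*v));
	return arch_atomic_add_return_release(i, v);
}

static __always_inline void
atomic_add(int i, atomic_t *v)
{
	/* no KCSAN barrier: no return value, hence unordered */
	instrument_atomic_read_write(v, sizeof(*v));
	arch_atomic_add(i, v);
}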
Subject: [PATCH v3 18/25] x86/barriers, kcsan: Use generic instrumentation for non-smp barriers
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:26 +0100
Message-Id: <20211130114433.2580590-19-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

Prefix all barriers with __, now that asm-generic/barrier.h supports
defining the final instrumented version of these barriers. The change is
limited to barriers used by x86-64.
Signed-off-by: Marco Elver

---
 arch/x86/include/asm/barrier.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 3ba772a69cc8..35389b2af88e 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -19,9 +19,9 @@
 #define wmb() asm volatile(ALTERNATIVE("lock; addl $0,-4(%%esp)", "sfence", \
				       X86_FEATURE_XMM2) ::: "memory", "cc")
 #else
-#define mb()	asm volatile("mfence":::"memory")
-#define rmb()	asm volatile("lfence":::"memory")
-#define wmb()	asm volatile("sfence" ::: "memory")
+#define __mb()	asm volatile("mfence":::"memory")
+#define __rmb()	asm volatile("lfence":::"memory")
+#define __wmb()	asm volatile("sfence" ::: "memory")
 #endif

 /**
@@ -51,8 +51,8 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
 /* Prevent speculative execution past this barrier. */
 #define barrier_nospec() alternative("", "lfence", X86_FEATURE_LFENCE_RDTSC)

-#define dma_rmb()	barrier()
-#define dma_wmb()	barrier()
+#define __dma_rmb()	barrier()
+#define __dma_wmb()	barrier()

 #define __smp_mb()	asm volatile("lock; addl $0,-4(%%" _ASM_SP ")" ::: "memory", "cc")
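For context, the __-prefixed arch-private definitions only become useful because the generic header can now wrap them. A minimal sketch of the assumed asm-generic/barrier.h pattern added earlier in this series (paraphrased for illustration, not quoted from that patch):

/* Sketch: define the final instrumented barrier on top of the arch's
 * __-prefixed primitive, if the arch provides one. */
#ifdef __mb
#define mb()		do { kcsan_mb(); __mb(); } while (0)
#endif

#ifdef __dma_rmb
#define dma_rmb()	do { kcsan_rmb(); __dma_rmb(); } while (0)
#endif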
From patchwork Tue Nov 30 11:44:27 2021
Subject: [PATCH v3 19/25] x86/qspinlock, kcsan: Instrument barrier of pv_queued_spin_unlock()
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:27 +0100
Message-Id: <20211130114433.2580590-20-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

If CONFIG_PARAVIRT_SPINLOCKS=y, queued_spin_unlock() is implemented
using pv_queued_spin_unlock() which is entirely inline asm based. As
such, we do not receive any KCSAN barrier instrumentation via regular
atomic operations.

Add the missing KCSAN barrier instrumentation for the
CONFIG_PARAVIRT_SPINLOCKS case.
Signed-off-by: Marco Elver

---
 arch/x86/include/asm/qspinlock.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index d86ab942219c..d87451df480b 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -53,6 +53,7 @@ static inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 static inline void queued_spin_unlock(struct qspinlock *lock)
 {
+	kcsan_release();
	pv_queued_spin_unlock(lock);
 }
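The asymmetry is easiest to see against the native path, which is an instrumented atomic store-release and therefore already visible to KCSAN; a sketch of the assumed native unlock for comparison (not part of this diff):

static __always_inline void native_queued_spin_unlock(struct qspinlock *lock)
{
	/* Instrumented atomic release store: implies kcsan_release() already. */
	smp_store_release(&lock->locked, 0);
}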
From patchwork Tue Nov 30 11:44:28 2021
Subject: [PATCH v3 20/25] mm, kcsan: Enable barrier instrumentation
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:28 +0100
Message-Id: <20211130114433.2580590-21-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

Some memory management calls imply memory barriers that are required to
avoid false positives. For example, without the correct instrumentation,
we could observe data races of the following variant:

 T0 | T1
 ------------------------+------------------------
    | *a = 42;    ---+
    | kfree(a);      |
    |                |
 b = kmalloc(..); // b == a  <-+
 *b = 42; // not a data race!  |

Therefore, instrument memory barriers in all allocator code currently
not being instrumented in a default build.

Signed-off-by: Marco Elver

---
 mm/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/Makefile b/mm/Makefile
index d6c0042e3aa0..7919cd7f13f2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -15,6 +15,8 @@ KCSAN_SANITIZE_slab_common.o := n
 KCSAN_SANITIZE_slab.o := n
 KCSAN_SANITIZE_slub.o := n
 KCSAN_SANITIZE_page_alloc.o := n
+# But enable explicit instrumentation for memory barriers.
+KCSAN_INSTRUMENT_BARRIERS := y

 # These files are disabled because they produce non-interesting and/or
 # flaky coverage that is not a function of syscall inputs. E.g. slab is out of
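To make the diagram above concrete, here is a minimal C sketch of the same scenario (hypothetical code; names invented for illustration):

#include <linux/slab.h>

struct obj { int val; };

/* T1: release the object; kfree() internally implies a release barrier. */
static void t1_release(struct obj *a)
{
	a->val = 42;
	kfree(a);
}

/* T0: the allocator may hand back the address T1 just freed (b == a). */
static void t0_reuse(void)
{
	struct obj *b = kmalloc(sizeof(*b), GFP_KERNEL);

	if (b)
		b->val = 42;	/* ordered after T1 by allocator barriers,
				 * so it must not be flagged as a race */
}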
From patchwork Tue Nov 30 11:44:29 2021
Subject: [PATCH v3 21/25] sched, kcsan: Enable memory barrier instrumentation
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:29 +0100
Message-Id: <20211130114433.2580590-22-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

There's no fundamental reason to disable KCSAN for scheduler code, except
for excessive noise and performance concerns (instrumenting scheduler code
is usually a good way to stress test KCSAN itself).

However, several core sched functions imply memory barriers that are
invisible to KCSAN without instrumentation, but are required to avoid
false positives. Therefore, unconditionally enable instrumentation of
memory barriers in scheduler code. Also update the comment to reflect this
and be a bit more brief.

Signed-off-by: Marco Elver

---
 kernel/sched/Makefile | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
index c7421f2d05e1..c83b37af155b 100644
--- a/kernel/sched/Makefile
+++ b/kernel/sched/Makefile
@@ -11,11 +11,10 @@ ccflags-y += $(call cc-disable-warning, unused-but-set-variable)
 # that is not a function of syscall inputs. E.g. involuntary context switches.
 KCOV_INSTRUMENT := n

-# There are numerous data races here, however, most of them are due to plain accesses.
-# This would make it even harder for syzbot to find reproducers, because these
-# bugs trigger without specific input. Disable by default, but should re-enable
-# eventually.
+# Disable KCSAN to avoid excessive noise and performance degradation. To avoid
+# false positives ensure barriers implied by sched functions are instrumented.
 KCSAN_SANITIZE := n
+KCSAN_INSTRUMENT_BARRIERS := y

 ifneq ($(CONFIG_SCHED_OMIT_FRAME_POINTER),y)
 # According to Alan Modra, the -fno-omit-frame-pointer is
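As a usage note, the two knobs generalize beyond mm/ and sched/; a sketch of the pattern for any other subsystem Makefile wanting the same trade-off (assumes the KCSAN_INSTRUMENT_BARRIERS build support introduced earlier in this series):

# Hypothetical example for some/subsystem/Makefile:
KCSAN_SANITIZE := n		# keep data-race instrumentation off here
KCSAN_INSTRUMENT_BARRIERS := y	# but still model the implied barriers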
From patchwork Tue Nov 30 11:44:30 2021
Subject: [PATCH v3 22/25] objtool, kcsan: Add memory barrier instrumentation to whitelist
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:30 +0100
Message-Id: <20211130114433.2580590-23-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

Add KCSAN's memory barrier instrumentation to objtool's uaccess
whitelist.

Signed-off-by: Marco Elver

---
 tools/objtool/check.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 21735829b860..61dfb66b30b6 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -849,6 +849,10 @@ static const char *uaccess_safe_builtin[] = {
	"__asan_report_store16_noabort",
	/* KCSAN */
	"__kcsan_check_access",
+	"__kcsan_mb",
+	"__kcsan_wmb",
+	"__kcsan_rmb",
+	"__kcsan_release",
	"kcsan_found_watchpoint",
	"kcsan_setup_watchpoint",
	"kcsan_check_scoped_accesses",
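For background, objtool rejects calls made between user_access_begin() and user_access_end() unless the callee is whitelisted. A hedged sketch of the kind of code that now needs the entries above (hypothetical function, illustrative only; with barrier instrumentation enabled, the release barrier below is assumed to expand to a call to __kcsan_release()):

static int example_signal_user(int __user *uptr, int *flag)
{
	if (!user_access_begin(uptr, sizeof(*uptr)))
		return -EFAULT;
	unsafe_put_user(1, uptr, efault);
	smp_store_release(flag, 1);	/* would emit a __kcsan_release() call */
	user_access_end();
	return 0;
efault:
	user_access_end();
	return -EFAULT;
}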
From patchwork Tue Nov 30 11:44:31 2021
Subject: [PATCH v3 23/25] objtool, kcsan: Remove memory barrier instrumentation from noinstr
From: Marco Elver
Date: Tue, 30 Nov 2021 12:44:31 +0100
Message-Id: <20211130114433.2580590-24-elver@google.com>
In-Reply-To: <20211130114433.2580590-1-elver@google.com>

Teach objtool to turn instrumentation required for memory barrier
modeling into nops in noinstr text. The __tsan_func_entry/exit calls are
still emitted by compilers even with the __no_sanitize_thread attribute.
The memory barrier instrumentation will be inserted explicitly (without
compiler help), and thus also needs to be removed explicitly.
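To illustrate the problem being solved (assumed Clang < 14 code generation; a sketch, not taken from a real build):

/*
 * Even with __no_sanitize_thread, a noinstr function may still compile to
 * roughly:
 *
 *	<some_noinstr_func>:
 *		call __tsan_func_entry
 *		...
 *		call __tsan_atomic_signal_fence	  <- explicit barrier modeling
 *		...
 *		call __tsan_func_exit
 *		ret
 *
 * objtool now rewrites such calls in .noinstr.text into NOPs.
 */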
Signed-off-by: Marco Elver Acked-by: Josh Poimboeuf --- v3: * s/removable_instr/profiling_func/ (suggested by Josh Poimboeuf) * s/__kcsan_(mb|wmb|rmb|release)/__atomic_signal_fence/, because Clang < 14.0 will still emit these in noinstr even with __no_kcsan. * Fix and add more comments. v2: * Rewrite after rebase to v5.16-rc1. --- tools/objtool/check.c | 37 ++++++++++++++++++++++++----- tools/objtool/include/objtool/elf.h | 2 +- 2 files changed, 32 insertions(+), 7 deletions(-) diff --git a/tools/objtool/check.c b/tools/objtool/check.c index 61dfb66b30b6..a9a1f7259d62 100644 --- a/tools/objtool/check.c +++ b/tools/objtool/check.c @@ -1072,11 +1072,11 @@ static void annotate_call_site(struct objtool_file *file, } /* - * Many compilers cannot disable KCOV with a function attribute - * so they need a little help, NOP out any KCOV calls from noinstr - * text. + * Many compilers cannot disable KCOV or sanitizer calls with a function + * attribute so they need a little help, NOP out any such calls from + * noinstr text. */ - if (insn->sec->noinstr && sym->kcov) { + if (insn->sec->noinstr && sym->profiling_func) { if (reloc) { reloc->type = R_NONE; elf_write_reloc(file->elf, reloc); @@ -1991,6 +1991,31 @@ static int read_intra_function_calls(struct objtool_file *file) return 0; } +/* + * Return true if name matches an instrumentation function, where calls to that + * function from noinstr code can safely be removed, but compilers won't do so. + */ +static bool is_profiling_func(const char *name) +{ + /* + * Many compilers cannot disable KCOV with a function attribute. + */ + if (!strncmp(name, "__sanitizer_cov_", 16)) + return true; + + /* + * Some compilers currently do not remove __tsan_func_entry/exit nor + * __tsan_atomic_signal_fence (used for barrier instrumentation) with + * the __no_sanitize_thread attribute, remove them. Once the kernel's + * minimum Clang version is 14.0, this can be removed. 
+	 */
+	if (!strncmp(name, "__tsan_func_", 12) ||
+	    !strcmp(name, "__tsan_atomic_signal_fence"))
+		return true;
+
+	return false;
+}
+
 static int classify_symbols(struct objtool_file *file)
 {
 	struct section *sec;
@@ -2011,8 +2036,8 @@ static int classify_symbols(struct objtool_file *file)
 		if (!strcmp(func->name, "__fentry__"))
 			func->fentry = true;
 
-		if (!strncmp(func->name, "__sanitizer_cov_", 16))
-			func->kcov = true;
+		if (is_profiling_func(func->name))
+			func->profiling_func = true;
 	}
 }

diff --git a/tools/objtool/include/objtool/elf.h b/tools/objtool/include/objtool/elf.h
index cdc739fa9a6f..d22336781401 100644
--- a/tools/objtool/include/objtool/elf.h
+++ b/tools/objtool/include/objtool/elf.h
@@ -58,7 +58,7 @@ struct symbol {
 	u8 static_call_tramp : 1;
 	u8 retpoline_thunk : 1;
 	u8 fentry : 1;
-	u8 kcov : 1;
+	u8 profiling_func : 1;
 	struct list_head pv_target;
 };
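To make the matcher concrete, a hedged sketch of which symbol names the new helper accepts (hypothetical assertions for illustration only; they assume is_profiling_func() from the hunk above is in scope, which in reality it is not, being static to check.c):

#include <assert.h>

int main(void)
{
	assert(is_profiling_func("__sanitizer_cov_trace_pc"));		/* KCOV */
	assert(is_profiling_func("__tsan_func_entry"));			/* KCSAN entry/exit */
	assert(is_profiling_func("__tsan_func_exit"));			/* KCSAN entry/exit */
	assert(is_profiling_func("__tsan_atomic_signal_fence"));	/* KCSAN barriers */
	assert(!is_profiling_func("__tsan_read4"));	/* plain access instrumentation: not removed here */
	return 0;
}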
From patchwork Tue Nov 30 11:44:32 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12647107
Date: Tue, 30 Nov 2021 12:44:32 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-25-elver@google.com>
References: <20211130114433.2580590-1-elver@google.com>
Subject: [PATCH v3 24/25] compiler_attributes.h: Add __disable_sanitizer_instrumentation
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov, Ingo Molnar, Mark Rutland, Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org

From: Alexander Potapenko

The new attribute maps to __attribute__((disable_sanitizer_instrumentation)), which will be supported by Clang >= 14.0. Future support in GCC is also possible. This attribute disables compiler instrumentation for kernel sanitizer tools, making it easier to implement noinstr. It is different from the existing __no_sanitize* attributes, which may still allow certain types of instrumentation to prevent false positives.

Signed-off-by: Alexander Potapenko
Signed-off-by: Marco Elver
---
v3:
* New patch.
---
 include/linux/compiler_attributes.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
index b9121afd8733..37e260020221 100644
--- a/include/linux/compiler_attributes.h
+++ b/include/linux/compiler_attributes.h
@@ -308,6 +308,24 @@
 # define __compiletime_warning(msg)
 #endif
 
+/*
+ * Optional: only supported since clang >= 14.0
+ *
+ * clang: https://clang.llvm.org/docs/AttributeReference.html#disable-sanitizer-instrumentation
+ *
+ * disable_sanitizer_instrumentation is not always similar to
+ * no_sanitize((<sanitizer-name>)): the latter may still let specific sanitizers
+ * insert code into functions to prevent false positives. Unlike that,
+ * disable_sanitizer_instrumentation prevents all kinds of instrumentation to
+ * functions with the attribute.
+ */
+#if __has_attribute(disable_sanitizer_instrumentation)
+# define __disable_sanitizer_instrumentation \
+	__attribute__((disable_sanitizer_instrumentation))
+#else
+# define __disable_sanitizer_instrumentation
+#endif
+
 /*
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html#index-weak-function-attribute
  * gcc: https://gcc.gnu.org/onlinedocs/gcc/Common-Variable-Attributes.html#index-weak-variable-attribute
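A minimal usage sketch (the function name is hypothetical; __disable_sanitizer_instrumentation is the macro added above):

/*
 * With Clang >= 14.0 this function is compiled with no sanitizer
 * instrumentation at all; with compilers lacking the attribute, the
 * macro expands to nothing and the function is built as usual.
 */
static __disable_sanitizer_instrumentation void hypothetical_early_entry(void)
{
	/* Code here must not rely on instrumented helpers either. */
}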
From patchwork Tue Nov 30 11:44:33 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12647109
Date: Tue, 30 Nov 2021 12:44:33 +0100
In-Reply-To: <20211130114433.2580590-1-elver@google.com>
Message-Id: <20211130114433.2580590-26-elver@google.com>
References: <20211130114433.2580590-1-elver@google.com>
Subject: [PATCH v3 25/25] kcsan: Support WEAK_MEMORY with Clang where no objtool support exists
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov, Ingo Molnar, Mark Rutland, Peter Zijlstra, Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com, linux-arch@vger.kernel.org, linux-doc@vger.kernel.org, linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, llvm@lists.linux.dev, x86@kernel.org

Clang and GCC behave a little differently when it comes to the __no_sanitize_thread attribute, each for valid reasons; depending on context, either behavior could be right.

Traditionally, user-space ThreadSanitizer [1] still expects instrumented builtin atomics (to avoid false positives) and __tsan_func_{entry,exit} (to generate meaningful stack traces), even if the function has the no_sanitize("thread") attribute.

[1] https://clang.llvm.org/docs/ThreadSanitizer.html#attribute-no-sanitize-thread

GCC doesn't follow the same policy (for better or worse), and removes all kinds of instrumentation when no_sanitize is added. Arguably, since this may be a problem for user-space ThreadSanitizer, we expect this may change in the future.

Since KCSAN != ThreadSanitizer, the likelihood of false positives, even without barrier instrumentation everywhere, is much lower by design. At least for Clang, however, to fully remove all sanitizer instrumentation we must add the disable_sanitizer_instrumentation attribute, which is available since Clang 14.0.

Signed-off-by: Marco Elver
---
v3:
* New patch.
---
 include/linux/compiler_types.h | 13 ++++++++++++-
 lib/Kconfig.kcsan              |  2 +-
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 1d32f4c03c9e..3c1795fdb568 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -198,9 +198,20 @@ struct ftrace_likely_data {
 # define __no_kasan_or_inline __always_inline
 #endif
 
-#define __no_kcsan __no_sanitize_thread
 #ifdef __SANITIZE_THREAD__
+/*
+ * Clang still emits instrumentation for __tsan_func_{entry,exit}() and builtin
+ * atomics even with __no_sanitize_thread (to avoid false positives in userspace
+ * ThreadSanitizer). The kernel's requirements are stricter and we really do not
+ * want any instrumentation with __no_kcsan.
+ *
+ * Therefore we add __disable_sanitizer_instrumentation where available to
+ * disable all instrumentation. See Kconfig.kcsan where this is mandatory.
+ */
+# define __no_kcsan __no_sanitize_thread __disable_sanitizer_instrumentation
 # define __no_sanitize_or_inline __no_kcsan notrace __maybe_unused
+#else
+# define __no_kcsan
 #endif
 
 #ifndef __no_sanitize_or_inline

diff --git a/lib/Kconfig.kcsan b/lib/Kconfig.kcsan
index e4394ea8068b..63b70b8c5551 100644
--- a/lib/Kconfig.kcsan
+++ b/lib/Kconfig.kcsan
@@ -198,7 +198,7 @@ config KCSAN_WEAK_MEMORY
 	# We can either let objtool nop __tsan_func_{entry,exit}() and builtin
 	# atomics instrumentation in .noinstr.text, or use a compiler that can
 	# implement __no_kcsan to really remove all instrumentation.
-	depends on STACK_VALIDATION || CC_IS_GCC
+	depends on STACK_VALIDATION || CC_IS_GCC || CLANG_VERSION >= 140000
 	help
 	  Enable support for modeling a subset of weak memory, which allows
 	  detecting a subset of data races due to missing memory barriers.
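Summarizing the resulting behavior as a sketch (under the assumptions stated in the commit message; not additional patch content):

/*
 * KCSAN build, Clang >= 14.0 (__SANITIZE_THREAD__ defined):
 *   __no_kcsan -> __no_sanitize_thread __disable_sanitizer_instrumentation
 *   All instrumentation is removed, so KCSAN_WEAK_MEMORY no longer
 *   requires objtool (STACK_VALIDATION) support.
 *
 * KCSAN build, GCC:
 *   __disable_sanitizer_instrumentation expands to nothing, but GCC
 *   already removes all instrumentation with __no_sanitize_thread
 *   (hence CC_IS_GCC in the Kconfig dependency).
 *
 * Build without KCSAN:
 *   __no_kcsan -> (empty)
 */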