From patchwork Thu Nov 18 08:10:09 2021
X-Patchwork-Submitter: Marco Elver
X-Patchwork-Id: 12626215
Date: Thu, 18 Nov 2021 09:10:09 +0100
In-Reply-To: <20211118081027.3175699-1-elver@google.com>
Message-Id: <20211118081027.3175699-6-elver@google.com>
References: <20211118081027.3175699-1-elver@google.com>
Subject: [PATCH v2 05/23] kcsan: Add core memory barrier instrumentation functions
From: Marco Elver
To: elver@google.com, "Paul E. McKenney"
Cc: Alexander Potapenko, Boqun Feng, Borislav Petkov, Dmitry Vyukov,
 Ingo Molnar, Josh Poimboeuf, Mark Rutland, Peter Zijlstra,
 Thomas Gleixner, Waiman Long, Will Deacon, kasan-dev@googlegroups.com,
 linux-arch@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kbuild@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, x86@kernel.org

Add the core memory barrier instrumentation functions. These invalidate
the current in-flight reordered access based on the rules for the
respective barrier types and the in-flight access type.

Signed-off-by: Marco Elver
---
v2:
* Rename kcsan_atomic_release() to kcsan_atomic_builtin_memorder() to
  avoid confusion.
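For intuition, the invalidation rules implemented by the
DEFINE_MEMORY_BARRIER() instances in the diff below can be summarized by
the following minimal userspace sketch (illustrative only: the struct and
the flag values here are simplified stand-ins for KCSAN's internal
struct kcsan_scoped_access and KCSAN_ACCESS_* flags, not the kernel
definitions):

    #include <stddef.h>

    /* Simplified stand-ins for KCSAN's access-type flags. */
    #define KCSAN_ACCESS_WRITE    (1 << 0)
    #define KCSAN_ACCESS_COMPOUND (1 << 1)

    /*
     * Stand-in for the one access KCSAN currently simulates as
     * "reordered past subsequent instructions".
     */
    struct reorder_access {
            size_t size;    /* 0 means no access is in flight */
            int type;       /* KCSAN_ACCESS_* flags */
    };

    /* mb and release order all preceding accesses: always invalidate. */
    static void barrier_mb(struct reorder_access *sa)      { sa->size = 0; }
    static void barrier_release(struct reorder_access *sa) { sa->size = 0; }

    /*
     * wmb only orders writes: invalidate if the in-flight access writes
     * (a plain write, or a compound read-write).
     */
    static void barrier_wmb(struct reorder_access *sa)
    {
            if (sa->type & (KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND))
                    sa->size = 0;
    }

    /*
     * rmb only orders reads: invalidate if the in-flight access reads
     * (a plain read, or a compound read-write).
     */
    static void barrier_rmb(struct reorder_access *sa)
    {
            if (!(sa->type & KCSAN_ACCESS_WRITE) || (sa->type & KCSAN_ACCESS_COMPOUND))
                    sa->size = 0;
    }

The conditions mirror the order_before_cond arguments passed to
DEFINE_MEMORY_BARRIER() in kernel/kcsan/core.c below.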
---
 include/linux/kcsan-checks.h | 41 ++++++++++++++++++++++++++++++++++--
 kernel/kcsan/core.c          | 36 +++++++++++++++++++++++++++++++
 2 files changed, 75 insertions(+), 2 deletions(-)

diff --git a/include/linux/kcsan-checks.h b/include/linux/kcsan-checks.h
index a1c6a89fde71..c9e7c39a7d7b 100644
--- a/include/linux/kcsan-checks.h
+++ b/include/linux/kcsan-checks.h
@@ -36,6 +36,26 @@
  */
 void __kcsan_check_access(const volatile void *ptr, size_t size, int type);
 
+/**
+ * __kcsan_mb - full memory barrier instrumentation
+ */
+void __kcsan_mb(void);
+
+/**
+ * __kcsan_wmb - write memory barrier instrumentation
+ */
+void __kcsan_wmb(void);
+
+/**
+ * __kcsan_rmb - read memory barrier instrumentation
+ */
+void __kcsan_rmb(void);
+
+/**
+ * __kcsan_release - release barrier instrumentation
+ */
+void __kcsan_release(void);
+
 /**
  * kcsan_disable_current - disable KCSAN for the current context
  *
@@ -159,6 +179,10 @@ void kcsan_end_scoped_access(struct kcsan_scoped_access *sa);
 static inline void __kcsan_check_access(const volatile void *ptr, size_t size,
 					int type) { }
 
+static inline void __kcsan_mb(void)			{ }
+static inline void __kcsan_wmb(void)			{ }
+static inline void __kcsan_rmb(void)			{ }
+static inline void __kcsan_release(void)		{ }
 static inline void kcsan_disable_current(void)		{ }
 static inline void kcsan_enable_current(void)		{ }
 static inline void kcsan_enable_current_nowarn(void)	{ }
@@ -191,12 +215,25 @@ static inline void kcsan_end_scoped_access(struct kcsan_scoped_access *sa) { }
  */
 #define __kcsan_disable_current kcsan_disable_current
 #define __kcsan_enable_current kcsan_enable_current_nowarn
-#else
+#else /* __SANITIZE_THREAD__ */
 static inline void kcsan_check_access(const volatile void *ptr, size_t size,
 				      int type) { }
 static inline void __kcsan_enable_current(void)  { }
 static inline void __kcsan_disable_current(void) { }
-#endif
+#endif /* __SANITIZE_THREAD__ */
+
+#if defined(CONFIG_KCSAN_WEAK_MEMORY) && \
+	(defined(__SANITIZE_THREAD__) || defined(__KCSAN_INSTRUMENT_BARRIERS__))
+#define kcsan_mb	__kcsan_mb
+#define kcsan_wmb	__kcsan_wmb
+#define kcsan_rmb	__kcsan_rmb
+#define kcsan_release	__kcsan_release
+#else /* CONFIG_KCSAN_WEAK_MEMORY && (__SANITIZE_THREAD__ || __KCSAN_INSTRUMENT_BARRIERS__) */
+static inline void kcsan_mb(void)	{ }
+static inline void kcsan_wmb(void)	{ }
+static inline void kcsan_rmb(void)	{ }
+static inline void kcsan_release(void)	{ }
+#endif /* CONFIG_KCSAN_WEAK_MEMORY && (__SANITIZE_THREAD__ || __KCSAN_INSTRUMENT_BARRIERS__) */
 
 /**
  * __kcsan_check_read - check regular read access for races
diff --git a/kernel/kcsan/core.c b/kernel/kcsan/core.c
index 24d82baa807d..840ed8e35f75 100644
--- a/kernel/kcsan/core.c
+++ b/kernel/kcsan/core.c
@@ -955,6 +955,28 @@ void __kcsan_check_access(const volatile void *ptr, size_t size, int type)
 }
 EXPORT_SYMBOL(__kcsan_check_access);
 
+#define DEFINE_MEMORY_BARRIER(name, order_before_cond)				\
+	kcsan_noinstr void __kcsan_##name(void)					\
+	{									\
+		struct kcsan_scoped_access *sa;					\
+		if (within_noinstr(_RET_IP_))					\
+			return;							\
+		instrumentation_begin();					\
+		sa = get_reorder_access(get_ctx());				\
+		if (!sa)							\
+			goto out;						\
+		if (order_before_cond)						\
+			sa->size = 0;						\
+	out:									\
+		instrumentation_end();						\
+	}									\
+	EXPORT_SYMBOL(__kcsan_##name)
+
+DEFINE_MEMORY_BARRIER(mb, true);
+DEFINE_MEMORY_BARRIER(wmb, sa->type & (KCSAN_ACCESS_WRITE | KCSAN_ACCESS_COMPOUND));
+DEFINE_MEMORY_BARRIER(rmb, !(sa->type & KCSAN_ACCESS_WRITE) || (sa->type & KCSAN_ACCESS_COMPOUND));
+DEFINE_MEMORY_BARRIER(release, true);
+
 /*
  * KCSAN uses the same instrumentation that is emitted by supported compilers
  * for ThreadSanitizer (TSAN).
@@ -1143,10 +1165,19 @@ EXPORT_SYMBOL(__tsan_init);
  * functions, whose job is to also execute the operation itself.
  */
 
+static __always_inline void kcsan_atomic_builtin_memorder(int memorder)
+{
+	if (memorder == __ATOMIC_RELEASE ||
+	    memorder == __ATOMIC_SEQ_CST ||
+	    memorder == __ATOMIC_ACQ_REL)
+		__kcsan_release();
+}
+
 #define DEFINE_TSAN_ATOMIC_LOAD_STORE(bits)                                                        \
 	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder);                      \
 	u##bits __tsan_atomic##bits##_load(const u##bits *ptr, int memorder)                       \
 	{                                                                                          \
+		kcsan_atomic_builtin_memorder(memorder);                                           \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                                    \
 			check_access(ptr, bits / BITS_PER_BYTE, KCSAN_ACCESS_ATOMIC, _RET_IP_);   \
 		}                                                                                  \
@@ -1156,6 +1187,7 @@ EXPORT_SYMBOL(__tsan_init);
 	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder);                  \
 	void __tsan_atomic##bits##_store(u##bits *ptr, u##bits v, int memorder)                   \
 	{                                                                                          \
+		kcsan_atomic_builtin_memorder(memorder);                                           \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                                    \
 			check_access(ptr, bits / BITS_PER_BYTE,                                    \
 				     KCSAN_ACCESS_WRITE | KCSAN_ACCESS_ATOMIC, _RET_IP_);          \
@@ -1168,6 +1200,7 @@ EXPORT_SYMBOL(__tsan_init);
 	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder);                 \
 	u##bits __tsan_atomic##bits##_##op(u##bits *ptr, u##bits v, int memorder)                  \
 	{                                                                                          \
+		kcsan_atomic_builtin_memorder(memorder);                                           \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                                    \
 			check_access(ptr, bits / BITS_PER_BYTE,                                    \
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |                  \
@@ -1200,6 +1233,7 @@ EXPORT_SYMBOL(__tsan_init);
 	int __tsan_atomic##bits##_compare_exchange_##strength(u##bits *ptr, u##bits *exp,         \
 							       u##bits val, int mo, int fail_mo)   \
 	{                                                                                          \
+		kcsan_atomic_builtin_memorder(mo);                                                 \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                                    \
 			check_access(ptr, bits / BITS_PER_BYTE,                                    \
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |                  \
@@ -1215,6 +1249,7 @@ EXPORT_SYMBOL(__tsan_init);
 	u##bits __tsan_atomic##bits##_compare_exchange_val(u##bits *ptr, u##bits exp, u##bits val, \
 							   int mo, int fail_mo)                    \
 	{                                                                                          \
+		kcsan_atomic_builtin_memorder(mo);                                                 \
 		if (!IS_ENABLED(CONFIG_KCSAN_IGNORE_ATOMICS)) {                                    \
 			check_access(ptr, bits / BITS_PER_BYTE,                                    \
 				     KCSAN_ACCESS_COMPOUND | KCSAN_ACCESS_WRITE |                  \
@@ -1246,6 +1281,7 @@ DEFINE_TSAN_ATOMIC_OPS(64);
 void __tsan_atomic_thread_fence(int memorder);
 void __tsan_atomic_thread_fence(int memorder)
 {
+	kcsan_atomic_builtin_memorder(memorder);
 	__atomic_thread_fence(memorder);
 }
 EXPORT_SYMBOL(__tsan_atomic_thread_fence);
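
As a usage illustration of the builtin-atomic path (the example function
and variable names are hypothetical; only the __tsan_* rewriting itself
is the compiler's documented TSAN instrumentation ABI): with
CONFIG_KCSAN_WEAK_MEMORY, a builtin atomic with release or stronger
ordering now also invalidates the in-flight reordered access via
kcsan_atomic_builtin_memorder().

    /* Hypothetical instrumented code, for illustration only. */
    unsigned int ready;

    void producer(void)
    {
            /*
             * Under -fsanitize=thread the compiler rewrites this builtin
             * into __tsan_atomic32_store(&ready, 1, __ATOMIC_RELEASE),
             * which now calls kcsan_atomic_builtin_memorder(__ATOMIC_RELEASE)
             * -> __kcsan_release(), invalidating any in-flight reordered
             * access, before the usual check_access() and the store itself.
             */
            __atomic_store_n(&ready, 1, __ATOMIC_RELEASE);
    }

    void consumer(void)
    {
            /*
             * This becomes __tsan_atomic32_load(&ready, __ATOMIC_ACQUIRE).
             * __ATOMIC_ACQUIRE is not in the RELEASE/SEQ_CST/ACQ_REL set,
             * so kcsan_atomic_builtin_memorder() is a no-op here: acquire
             * does not order preceding accesses, so the reorder slot
             * stays live.
             */
            while (!__atomic_load_n(&ready, __ATOMIC_ACQUIRE))
                    ;
    }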