From patchwork Tue Mar 1 01:04:09 2022
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 12763947
From: Namhyung Kim
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
Cc: LKML, Thomas Gleixner, Steven Rostedt, Mathieu Desnoyers,
 Byungchul Park, "Paul E. McKenney", Arnd Bergmann,
 linux-arch@vger.kernel.org, bpf@vger.kernel.org, Radoslaw Burny
Subject: [PATCH 1/4] locking: Add lock contention tracepoints
Date: Mon, 28 Feb 2022 17:04:09 -0800
Message-Id: <20220301010412.431299-2-namhyung@kernel.org>
In-Reply-To: <20220301010412.431299-1-namhyung@kernel.org>
References: <20220301010412.431299-1-namhyung@kernel.org>
X-Mailing-List: bpf@vger.kernel.org

This adds two new lock contention tracepoints:

 * lock:contention_begin
 * lock:contention_end

The lock:contention_begin tracepoint takes a flags argument to classify
locks.  It is useful to pass the task state the waiter is about to enter:
that tells whether the given lock is busy-waiting (a spinlock) or sleeping
(a mutex or semaphore).  The flags also encode whether it is a
reader-writer lock, a real-time variant, or a per-cpu lock.

Move the tracepoint definitions into a separate file so that some of them
can be used without lockdep.  Also add the lock_trace.h header to provide
access to the tracepoints from headers (like spinlock.h) that cannot
include the tracepoint definitions directly.

Signed-off-by: Namhyung Kim
---
 include/linux/lock_trace.h  | 31 +++++++++++++++++++++++++++
 include/trace/events/lock.h | 42 ++++++++++++++++++++++++++++++++++++-
 kernel/locking/Makefile     |  2 +-
 kernel/locking/lockdep.c    |  1 -
 kernel/locking/tracepoint.c | 21 +++++++++++++++++++
 5 files changed, 94 insertions(+), 3 deletions(-)
 create mode 100644 include/linux/lock_trace.h
 create mode 100644 kernel/locking/tracepoint.c

diff --git a/include/linux/lock_trace.h b/include/linux/lock_trace.h
new file mode 100644
index 000000000000..d84bcc9570a4
--- /dev/null
+++ b/include/linux/lock_trace.h
@@ -0,0 +1,31 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __LINUX_LOCK_TRACE_H
+#define __LINUX_LOCK_TRACE_H
+
+#include <linux/tracepoint-defs.h>
+
+DECLARE_TRACEPOINT(contention_begin);
+DECLARE_TRACEPOINT(contention_end);
+
+#define LCB_F_READ	(1U << 31)
+#define LCB_F_WRITE	(1U << 30)
+#define LCB_F_RT	(1U << 29)
+#define LCB_F_PERCPU	(1U << 28)
+
+extern void lock_contention_begin(void *lock, unsigned long ip,
+				  unsigned int flags);
+extern void lock_contention_end(void *lock);
+
+#define LOCK_CONTENTION_BEGIN(_lock, _flags)				\
+	do {								\
+		if (tracepoint_enabled(contention_begin))		\
+			lock_contention_begin(_lock, _RET_IP_, _flags);	\
+	} while (0)
+
+#define LOCK_CONTENTION_END(_lock)					\
+	do {								\
+		if (tracepoint_enabled(contention_end))			\
+			lock_contention_end(_lock);			\
+	} while (0)
+
+#endif /* __LINUX_LOCK_TRACE_H */
diff --git a/include/trace/events/lock.h b/include/trace/events/lock.h
index d7512129a324..7bca0a537dbd 100644
--- a/include/trace/events/lock.h
+++ b/include/trace/events/lock.h
@@ -5,11 +5,12 @@
 #if !defined(_TRACE_LOCK_H) || defined(TRACE_HEADER_MULTI_READ)
 #define _TRACE_LOCK_H
 
-#include <linux/lockdep.h>
 #include <linux/tracepoint.h>
 
 #ifdef CONFIG_LOCKDEP
 
+#include <linux/lockdep.h>
+
 TRACE_EVENT(lock_acquire,
 
 	TP_PROTO(struct lockdep_map *lock, unsigned int subclass,
@@ -81,6 +82,45 @@ DEFINE_EVENT(lock, lock_acquired,
 #endif
 #endif
 
+TRACE_EVENT(contention_begin,
+
+	TP_PROTO(void *lock, unsigned long ip, unsigned int flags),
+
+	TP_ARGS(lock, ip, flags),
+
+	TP_STRUCT__entry(
+		__field(void *, lock_addr)
+		__field(unsigned long, ip)
+		__field(unsigned int, flags)
+	),
+
+	TP_fast_assign(
+		__entry->lock_addr = lock;
+		__entry->ip = ip;
+		__entry->flags = flags;
+	),
+
+	TP_printk("%p %pS (%x)", __entry->lock_addr, (void *) __entry->ip,
+		  __entry->flags)
+);
+
+TRACE_EVENT(contention_end,
+
+	TP_PROTO(void *lock),
+
+	TP_ARGS(lock),
+
+	TP_STRUCT__entry(
+		__field(void *, lock_addr)
+	),
+
+	TP_fast_assign(
+		__entry->lock_addr = lock;
+	),
+
+	TP_printk("%p", __entry->lock_addr)
+);
+
 #endif /* _TRACE_LOCK_H */
 
 /* This part must be outside protection */
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index d51cabf28f38..d212401adcdc 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -3,7 +3,7 @@
 # and is generally not a function of system call inputs.
 KCOV_INSTRUMENT		:= n
 
-obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o
+obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o tracepoint.o
 
 # Avoid recursion lockdep -> KCSAN -> ... -> lockdep.
 KCSAN_SANITIZE_lockdep.o := n
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 50036c10b518..08f8fb6a2d1e 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -60,7 +60,6 @@
 
 #include "lockdep_internals.h"
 
-#define CREATE_TRACE_POINTS
 #include <trace/events/lock.h>
 
 #ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/tracepoint.c b/kernel/locking/tracepoint.c
new file mode 100644
index 000000000000..d6f5c6c1d7bd
--- /dev/null
+++ b/kernel/locking/tracepoint.c
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#include <linux/lock_trace.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/lock.h>
+
+/* these are exported via LOCK_CONTENTION_{BEGIN,END} macro */
+EXPORT_TRACEPOINT_SYMBOL_GPL(contention_begin);
+EXPORT_TRACEPOINT_SYMBOL_GPL(contention_end);
+
+void lock_contention_begin(void *lock, unsigned long ip, unsigned int flags)
+{
+	trace_contention_begin(lock, ip, flags);
+}
+EXPORT_SYMBOL_GPL(lock_contention_begin);
+
+void lock_contention_end(void *lock)
+{
+	trace_contention_end(lock);
+}
+EXPORT_SYMBOL_GPL(lock_contention_end);
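[Editorial note: for context on how the LOCK_CONTENTION_{BEGIN,END} pair in
lock_trace.h is meant to be consumed, here is a minimal sketch of a lock
wrapper in the same style as the qspinlock hunk in patch 2.  The my_lock,
my_trylock and my_lock_slowpath names are hypothetical, not part of the
patch, and the snippet assumes a kernel build environment.]

	/* Hypothetical lock-acquire wrapper; names are illustrative only. */
	#include <linux/lock_trace.h>

	static inline void my_lock(struct my_lock *lock)
	{
		if (likely(my_trylock(lock)))
			return;	/* uncontended fast path: no tracepoint fired */

		/*
		 * Contended: bracket only the slow path so the two events
		 * measure the actual wait.  tracepoint_enabled() keeps the
		 * disabled case down to a single static-branch check.
		 */
		LOCK_CONTENTION_BEGIN(lock, LCB_F_WRITE);
		my_lock_slowpath(lock);
		LOCK_CONTENTION_END(lock);
	}

The flags argument would carry LCB_F_READ/LCB_F_WRITE (or 0 for a plain
spinlock), plus a task state for sleeping locks, as patch 2 shows.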
From patchwork Tue Mar 1 01:04:10 2022
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 12763948
From: Namhyung Kim
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
Cc: LKML, Thomas Gleixner, Steven Rostedt, Mathieu Desnoyers,
 Byungchul Park, "Paul E. McKenney", Arnd Bergmann,
 linux-arch@vger.kernel.org, bpf@vger.kernel.org, Radoslaw Burny
Subject: [PATCH 2/4] locking: Apply contention tracepoints in the slow path
Date: Mon, 28 Feb 2022 17:04:10 -0800
Message-Id: <20220301010412.431299-3-namhyung@kernel.org>
In-Reply-To: <20220301010412.431299-1-namhyung@kernel.org>
References: <20220301010412.431299-1-namhyung@kernel.org>
X-Mailing-List: bpf@vger.kernel.org

Add the lock contention tracepoints in various lock function slow paths.
Note that each architecture can define its spinlock differently, so for
now they are added only to the generic qspinlock.

Signed-off-by: Namhyung Kim
---
 include/asm-generic/qrwlock.h   |  5 +++++
 include/asm-generic/qspinlock.h |  3 +++
 include/trace/events/lock.h     |  1 +
 kernel/locking/mutex.c          |  4 ++++
 kernel/locking/percpu-rwsem.c   | 11 ++++++++++-
 kernel/locking/rtmutex.c        | 12 +++++++++++-
 kernel/locking/rwbase_rt.c      | 11 ++++++++++-
 kernel/locking/rwsem.c          | 16 ++++++++++++++--
 8 files changed, 58 insertions(+), 5 deletions(-)

diff --git a/include/asm-generic/qrwlock.h b/include/asm-generic/qrwlock.h
index 7ae0ece07b4e..9735c39b05bb 100644
--- a/include/asm-generic/qrwlock.h
+++ b/include/asm-generic/qrwlock.h
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include <linux/lock_trace.h>
 
 #include
 
@@ -80,7 +81,9 @@ static inline void queued_read_lock(struct qrwlock *lock)
 		return;
 
 	/* The slowpath will decrement the reader count, if necessary. */
+	LOCK_CONTENTION_BEGIN(lock, LCB_F_READ);
 	queued_read_lock_slowpath(lock);
+	LOCK_CONTENTION_END(lock);
 }
 
 /**
@@ -94,7 +97,9 @@ static inline void queued_write_lock(struct qrwlock *lock)
 	if (likely(atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED)))
 		return;
 
+	LOCK_CONTENTION_BEGIN(lock, LCB_F_WRITE);
 	queued_write_lock_slowpath(lock);
+	LOCK_CONTENTION_END(lock);
 }
 
 /**
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index d74b13825501..986b96fadbf9 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -12,6 +12,7 @@
 #include
 #include
+#include <linux/lock_trace.h>
 
 #ifndef queued_spin_is_locked
 /**
@@ -82,7 +83,9 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
 		return;
 
+	LOCK_CONTENTION_BEGIN(lock, 0);
 	queued_spin_lock_slowpath(lock, val);
+	LOCK_CONTENTION_END(lock);
 }
 #endif
 
diff --git a/include/trace/events/lock.h b/include/trace/events/lock.h
index 7bca0a537dbd..9b285083f88f 100644
--- a/include/trace/events/lock.h
+++ b/include/trace/events/lock.h
@@ -6,6 +6,7 @@
 #define _TRACE_LOCK_H
 
 #include <linux/tracepoint.h>
+#include
 
 #ifdef CONFIG_LOCKDEP
 
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 5e3585950ec8..756624c14dfd 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -30,6 +30,8 @@
 #include
 #include
 
+#include <trace/events/lock.h>
+
 #ifndef CONFIG_PREEMPT_RT
 #include "mutex.h"
 
@@ -626,6 +628,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	waiter.ww_ctx = ww_ctx;
 
 	lock_contended(&lock->dep_map, ip);
+	trace_contention_begin(lock, ip, state);
 
 	if (!use_ww_ctx) {
 		/* add waiting tasks to the end of the waitqueue (FIFO): */
@@ -688,6 +691,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
 	}
 	raw_spin_lock(&lock->wait_lock);
 acquired:
+	trace_contention_end(lock);
 	__set_current_state(TASK_RUNNING);
 
 	if (ww_ctx) {
diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c
index c9fdae94e098..4049b79b3dcc 100644
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <trace/events/lock.h>
 
 int __percpu_init_rwsem(struct percpu_rw_semaphore *sem,
 			const char *name, struct lock_class_key *key)
@@ -171,9 +172,12 @@ bool __sched __percpu_down_read(struct percpu_rw_semaphore *sem, bool try)
 	if (try)
 		return false;
 
+	trace_contention_begin(sem, _RET_IP_,
+			       LCB_F_READ | LCB_F_PERCPU | TASK_UNINTERRUPTIBLE);
 	preempt_enable();
 	percpu_rwsem_wait(sem, /* .reader = */ true);
 	preempt_disable();
+	trace_contention_end(sem);
 
 	return true;
 }
@@ -224,8 +228,13 @@ void __sched percpu_down_write(struct percpu_rw_semaphore *sem)
 	 * Try set sem->block; this provides writer-writer exclusion.
 	 * Having sem->block set makes new readers block.
 	 */
-	if (!__percpu_down_write_trylock(sem))
+	if (!__percpu_down_write_trylock(sem)) {
+		unsigned int flags = LCB_F_WRITE | LCB_F_PERCPU | TASK_UNINTERRUPTIBLE;
+
+		trace_contention_begin(sem, _RET_IP_, flags);
 		percpu_rwsem_wait(sem, /* .reader = */ false);
+		trace_contention_end(sem);
+	}
 
 	/* smp_mb() implied by __percpu_down_write_trylock() on success -- D matches A */
 
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 8555c4efe97c..e49f5d2a232b 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -24,6 +24,8 @@
 #include
 #include
 
+#include <trace/events/lock.h>
+
 #include "rtmutex_common.h"
 
 #ifndef WW_RT
@@ -1652,10 +1654,16 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
 static __always_inline int __rt_mutex_lock(struct rt_mutex_base *lock,
 					   unsigned int state)
 {
+	int ret;
+
 	if (likely(rt_mutex_cmpxchg_acquire(lock, NULL, current)))
 		return 0;
 
-	return rt_mutex_slowlock(lock, NULL, state);
+	trace_contention_begin(lock, _RET_IP_, LCB_F_RT | state);
+	ret = rt_mutex_slowlock(lock, NULL, state);
+	trace_contention_end(lock);
+
+	return ret;
 }
 #endif /* RT_MUTEX_BUILD_MUTEX */
 
@@ -1718,9 +1726,11 @@ static __always_inline void __sched rtlock_slowlock(struct rt_mutex_base *lock)
 {
 	unsigned long flags;
 
+	trace_contention_begin(lock, _RET_IP_, LCB_F_RT | TASK_RTLOCK_WAIT);
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
 	rtlock_slowlock_locked(lock);
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+	trace_contention_end(lock);
 }
 #endif /* RT_MUTEX_BUILD_SPINLOCKS */
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 6fd3162e4098..8a28f1195c58 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -136,10 +136,16 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
 static __always_inline int rwbase_read_lock(struct rwbase_rt *rwb,
 					    unsigned int state)
 {
+	int ret;
+
 	if (rwbase_read_trylock(rwb))
 		return 0;
 
-	return __rwbase_read_lock(rwb, state);
+	trace_contention_begin(rwb, _RET_IP_, LCB_F_READ | LCB_F_RT | state);
+	ret = __rwbase_read_lock(rwb, state);
+	trace_contention_end(rwb);
+
+	return ret;
 }
 
 static void __sched __rwbase_read_unlock(struct rwbase_rt *rwb,
@@ -246,12 +252,14 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 	if (__rwbase_write_trylock(rwb))
 		goto out_unlock;
 
+	trace_contention_begin(rwb, _RET_IP_, LCB_F_WRITE | LCB_F_RT | state);
 	rwbase_set_and_save_current_state(state);
 	for (;;) {
 		/* Optimized out for rwlocks */
 		if (rwbase_signal_pending_state(state, current)) {
 			rwbase_restore_current_state();
 			__rwbase_write_unlock(rwb, 0, flags);
+			trace_contention_end(rwb);
 			return -EINTR;
 		}
 
@@ -265,6 +273,7 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 		set_current_state(state);
 	}
 	rwbase_restore_current_state();
+	trace_contention_end(rwb);
 
 out_unlock:
 	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index acde5d6f1254..a1a17af7f747 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include <trace/events/lock.h>
 
 #ifndef CONFIG_PREEMPT_RT
 #include "lock_events.h"
@@ -1209,9 +1210,14 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
 static inline int __down_read_common(struct rw_semaphore *sem, int state)
 {
 	long count;
+	void *ret;
 
 	if (!rwsem_read_trylock(sem, &count)) {
-		if (IS_ERR(rwsem_down_read_slowpath(sem, count, state)))
+		trace_contention_begin(sem, _RET_IP_, LCB_F_READ | state);
+		ret = rwsem_down_read_slowpath(sem, count, state);
+		trace_contention_end(sem);
+
+		if (IS_ERR(ret))
 			return -EINTR;
 		DEBUG_RWSEMS_WARN_ON(!is_rwsem_reader_owned(sem), sem);
 	}
@@ -1255,8 +1261,14 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
  */
 static inline int __down_write_common(struct rw_semaphore *sem, int state)
 {
+	void *ret;
+
 	if (unlikely(!rwsem_write_trylock(sem))) {
-		if (IS_ERR(rwsem_down_write_slowpath(sem, state)))
+		trace_contention_begin(sem, _RET_IP_, LCB_F_WRITE | state);
+		ret = rwsem_down_write_slowpath(sem, state);
+		trace_contention_end(sem);
+
+		if (IS_ERR(ret))
 			return -EINTR;
 	}
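[Editorial note: the hunks above show that the flags value mixes the
LCB_F_* bits from patch 1 with the task state the waiter sleeps in, or 0
for a busy-waiting lock.  Below is a minimal userspace sketch of how a
trace consumer might decode the field; the decode_lcb_flags helper is an
illustrative assumption, not an existing API.]

	#include <stdio.h>

	/* Flag bits as defined in include/linux/lock_trace.h (patch 1). */
	#define LCB_F_READ	(1U << 31)
	#define LCB_F_WRITE	(1U << 30)
	#define LCB_F_RT	(1U << 29)
	#define LCB_F_PERCPU	(1U << 28)

	/* Hypothetical helper a tracing tool might use. */
	static void decode_lcb_flags(unsigned int flags)
	{
		/*
		 * Everything below the LCB_F_* bits is the task state passed
		 * by sleeping locks; 0 means the waiter busy-waits, as in
		 * the qspinlock hunk above.
		 */
		unsigned int state = flags & ~(LCB_F_READ | LCB_F_WRITE |
					       LCB_F_RT | LCB_F_PERCPU);

		printf("%s%s%s%sstate=%#x\n",
		       (flags & LCB_F_READ)   ? "read "   : "",
		       (flags & LCB_F_WRITE)  ? "write "  : "",
		       (flags & LCB_F_RT)     ? "rt "     : "",
		       (flags & LCB_F_PERCPU) ? "percpu " : "",
		       state);
	}

	int main(void)
	{
		/* Mirrors __percpu_down_read() above; 2 is the value of
		 * TASK_UNINTERRUPTIBLE on current kernels. */
		decode_lcb_flags(LCB_F_READ | LCB_F_PERCPU | 2);
		return 0;
	}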
From patchwork Tue Mar 1 01:04:11 2022
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 12763949
From: Namhyung Kim
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
Cc: LKML, Thomas Gleixner, Steven Rostedt, Mathieu Desnoyers,
 Byungchul Park, "Paul E. McKenney", Arnd Bergmann,
 linux-arch@vger.kernel.org, bpf@vger.kernel.org, Radoslaw Burny
Subject: [PATCH 3/4] locking/mutex: Pass proper call-site ip
Date: Mon, 28 Feb 2022 17:04:11 -0800
Message-Id: <20220301010412.431299-4-namhyung@kernel.org>
In-Reply-To: <20220301010412.431299-1-namhyung@kernel.org>
References: <20220301010412.431299-1-namhyung@kernel.org>
X-Mailing-List: bpf@vger.kernel.org

__mutex_lock_slowpath() and friends are declared noinline, so _RET_IP_
inside them resolves to their immediate caller, mutex_lock(), which is
not meaningful.  Pass the ip down from mutex_lock() so the trace carries
the actual call-site information.

Signed-off-by: Namhyung Kim
---
 kernel/locking/mutex.c | 40 ++++++++++++++++++++--------------------
 1 file changed, 20 insertions(+), 20 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 756624c14dfd..126b014098f3 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -254,7 +254,7 @@ static void __mutex_handoff(struct mutex *lock, struct task_struct *task)
  * We also put the fastpath first in the kernel image, to make sure the
  * branch is predicted by the CPU as default-untaken.
  */
-static void __sched __mutex_lock_slowpath(struct mutex *lock);
+static void __sched __mutex_lock_slowpath(struct mutex *lock, unsigned long ip);
 
 /**
  * mutex_lock - acquire the mutex
@@ -282,7 +282,7 @@ void __sched mutex_lock(struct mutex *lock)
 	might_sleep();
 
 	if (!__mutex_trylock_fast(lock))
-		__mutex_lock_slowpath(lock);
+		__mutex_lock_slowpath(lock, _RET_IP_);
 }
 EXPORT_SYMBOL(mutex_lock);
 #endif
@@ -947,10 +947,10 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
  * mutex_lock_interruptible() and mutex_trylock().
  */
 static noinline int __sched
-__mutex_lock_killable_slowpath(struct mutex *lock);
+__mutex_lock_killable_slowpath(struct mutex *lock, unsigned long ip);
 
 static noinline int __sched
-__mutex_lock_interruptible_slowpath(struct mutex *lock);
+__mutex_lock_interruptible_slowpath(struct mutex *lock, unsigned long ip);
 
 /**
  * mutex_lock_interruptible() - Acquire the mutex, interruptible by signals.
@@ -971,7 +971,7 @@ int __sched mutex_lock_interruptible(struct mutex *lock)
 	if (__mutex_trylock_fast(lock))
 		return 0;
 
-	return __mutex_lock_interruptible_slowpath(lock);
+	return __mutex_lock_interruptible_slowpath(lock, _RET_IP_);
 }
 
 EXPORT_SYMBOL(mutex_lock_interruptible);
@@ -995,7 +995,7 @@ int __sched mutex_lock_killable(struct mutex *lock)
 	if (__mutex_trylock_fast(lock))
 		return 0;
 
-	return __mutex_lock_killable_slowpath(lock);
+	return __mutex_lock_killable_slowpath(lock, _RET_IP_);
 }
 EXPORT_SYMBOL(mutex_lock_killable);
 
@@ -1020,36 +1020,36 @@ void __sched mutex_lock_io(struct mutex *lock)
 EXPORT_SYMBOL_GPL(mutex_lock_io);
 
 static noinline void __sched
-__mutex_lock_slowpath(struct mutex *lock)
+__mutex_lock_slowpath(struct mutex *lock, unsigned long ip)
 {
-	__mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_);
+	__mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, NULL, ip);
 }
 
 static noinline int __sched
-__mutex_lock_killable_slowpath(struct mutex *lock)
+__mutex_lock_killable_slowpath(struct mutex *lock, unsigned long ip)
 {
-	return __mutex_lock(lock, TASK_KILLABLE, 0, NULL, _RET_IP_);
+	return __mutex_lock(lock, TASK_KILLABLE, 0, NULL, ip);
 }
 
 static noinline int __sched
-__mutex_lock_interruptible_slowpath(struct mutex *lock)
+__mutex_lock_interruptible_slowpath(struct mutex *lock, unsigned long ip)
 {
-	return __mutex_lock(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_);
+	return __mutex_lock(lock, TASK_INTERRUPTIBLE, 0, NULL, ip);
 }
 
 static noinline int __sched
-__ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+__ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx,
+			 unsigned long ip)
 {
-	return __ww_mutex_lock(&lock->base, TASK_UNINTERRUPTIBLE, 0,
-			       _RET_IP_, ctx);
+	return __ww_mutex_lock(&lock->base, TASK_UNINTERRUPTIBLE, 0, ip, ctx);
 }
 
 static noinline int __sched
 __ww_mutex_lock_interruptible_slowpath(struct ww_mutex *lock,
-				       struct ww_acquire_ctx *ctx)
+				       struct ww_acquire_ctx *ctx,
+				       unsigned long ip)
 {
-	return __ww_mutex_lock(&lock->base, TASK_INTERRUPTIBLE, 0,
-			       _RET_IP_, ctx);
+	return __ww_mutex_lock(&lock->base, TASK_INTERRUPTIBLE, 0, ip, ctx);
 }
 
 #endif
@@ -1094,7 +1094,7 @@ ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 		return 0;
 	}
 
-	return __ww_mutex_lock_slowpath(lock, ctx);
+	return __ww_mutex_lock_slowpath(lock, ctx, _RET_IP_);
 }
 EXPORT_SYMBOL(ww_mutex_lock);
 
@@ -1109,7 +1109,7 @@ ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 		return 0;
 	}
 
-	return __ww_mutex_lock_interruptible_slowpath(lock, ctx);
+	return __ww_mutex_lock_interruptible_slowpath(lock, ctx, _RET_IP_);
 }
 EXPORT_SYMBOL(ww_mutex_lock_interruptible);
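[Editorial note: the inlining problem this patch fixes can be demonstrated
in plain userspace C, since _RET_IP_ is essentially
__builtin_return_address(0).  Inside a noinline slow path it resolves to
the API function that called it, not to the user's call site.  All names
below are illustrative.]

	#include <stdio.h>

	/* Userspace stand-in for the kernel's _RET_IP_. */
	#define RET_IP ((unsigned long)__builtin_return_address(0))

	__attribute__((noinline))
	static void lock_slowpath(void)
	{
		/* noinline means a real call frame exists, so the return
		 * address points into lock_api() -- useless in a trace. */
		printf("slowpath ip: %#lx\n", RET_IP);
	}

	__attribute__((noinline))
	static void lock_api(void)
	{
		/* Here the return address is the call site in main() -- the
		 * value the tracepoint wants.  The fix is to capture it at
		 * this level and pass it down, as the patch does. */
		printf("api ip:      %#lx\n", RET_IP);
		lock_slowpath();
	}

	int main(void)
	{
		lock_api();
		return 0;
	}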
From patchwork Tue Mar 1 01:04:12 2022
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 12763950
From: Namhyung Kim
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
Cc: LKML, Thomas Gleixner, Steven Rostedt, Mathieu Desnoyers,
 Byungchul Park, "Paul E. McKenney", Arnd Bergmann,
 linux-arch@vger.kernel.org, bpf@vger.kernel.org, Radoslaw Burny
Subject: [PATCH 4/4] locking/rwsem: Pass proper call-site ip
Date: Mon, 28 Feb 2022 17:04:12 -0800
Message-Id: <20220301010412.431299-5-namhyung@kernel.org>
In-Reply-To: <20220301010412.431299-1-namhyung@kernel.org>
References: <20220301010412.431299-1-namhyung@kernel.org>
X-Mailing-List: bpf@vger.kernel.org

For some reason, __down_read_common() was not inlined on my system, so
the tracepoint only shows its caller, down_read().  It should record the
IP of the actual call site instead.

Add new variants of the LOCK_CONTENDED macros that pass _RET_IP_ to the
lock function, and make the rwsem down functions take an ip argument.

Signed-off-by: Namhyung Kim
---
 include/linux/lockdep.h | 29 ++++++++++++++++-
 kernel/locking/rwsem.c  | 69 ++++++++++++++++++++++-------------------
 2 files changed, 65 insertions(+), 33 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index 467b94257105..6aca885f356c 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -453,7 +453,16 @@ do {								\
 		lock_contended(&(_lock)->dep_map, _RET_IP_);	\
 		lock(_lock);					\
 	}							\
-	lock_acquired(&(_lock)->dep_map, _RET_IP_);		\
+	lock_acquired(&(_lock)->dep_map, _RET_IP_);		\
+} while (0)
+
+#define LOCK_CONTENDED_IP(_lock, try, lock)			\
+do {								\
+	if (!try(_lock)) {					\
+		lock_contended(&(_lock)->dep_map, _RET_IP_);	\
+		lock(_lock, _RET_IP_);				\
+	}							\
+	lock_acquired(&(_lock)->dep_map, _RET_IP_);		\
 } while (0)
 
 #define LOCK_CONTENDED_RETURN(_lock, try, lock)			\
@@ -468,6 +477,18 @@ do {								\
 	____err;						\
 })
 
+#define LOCK_CONTENDED_RETURN_IP(_lock, try, lock)		\
+({								\
+	int ____err = 0;					\
+	if (!try(_lock)) {					\
+		lock_contended(&(_lock)->dep_map, _RET_IP_);	\
+		____err = lock(_lock, _RET_IP_);		\
+	}							\
+	if (!____err)						\
+		lock_acquired(&(_lock)->dep_map, _RET_IP_);	\
+	____err;						\
+})
+
 #else /* CONFIG_LOCK_STAT */
 
 #define lock_contended(lockdep_map, ip) do {} while (0)
@@ -476,9 +497,15 @@ do {								\
 #define LOCK_CONTENDED(_lock, try, lock) \
 	lock(_lock)
 
+#define LOCK_CONTENDED_IP(_lock, try, lock) \
+	lock(_lock, _RET_IP_)
+
 #define LOCK_CONTENDED_RETURN(_lock, try, lock) \
 	lock(_lock)
 
+#define LOCK_CONTENDED_RETURN_IP(_lock, try, lock) \
+	lock(_lock, _RET_IP_)
+
 #endif /* CONFIG_LOCK_STAT */
 
 #ifdef CONFIG_PROVE_LOCKING
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index a1a17af7f747..eafb0faaed0d 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1207,13 +1207,14 @@ static struct rw_semaphore *rwsem_downgrade_wake(struct rw_semaphore *sem)
 /*
  * lock for reading
  */
-static inline int __down_read_common(struct rw_semaphore *sem, int state)
+static inline int __down_read_common(struct rw_semaphore *sem, int state,
+				     unsigned long ip)
 {
 	long count;
 	void *ret;
 
 	if (!rwsem_read_trylock(sem, &count)) {
-		trace_contention_begin(sem, _RET_IP_, LCB_F_READ | state);
+		trace_contention_begin(sem, ip, LCB_F_READ | state);
 		ret = rwsem_down_read_slowpath(sem, count, state);
 		trace_contention_end(sem);
 
@@ -1224,19 +1225,19 @@ static inline int __down_read_common(struct rw_semaphore *sem, int state)
 	return 0;
 }
 
-static inline void __down_read(struct rw_semaphore *sem)
+static inline void __down_read(struct rw_semaphore *sem, unsigned long ip)
 {
-	__down_read_common(sem, TASK_UNINTERRUPTIBLE);
+	__down_read_common(sem, TASK_UNINTERRUPTIBLE, ip);
 }
 
-static inline int __down_read_interruptible(struct rw_semaphore *sem)
+static inline int __down_read_interruptible(struct rw_semaphore *sem, unsigned long ip)
 {
-	return __down_read_common(sem, TASK_INTERRUPTIBLE);
+	return __down_read_common(sem, TASK_INTERRUPTIBLE, ip);
 }
 
-static inline int __down_read_killable(struct rw_semaphore *sem)
+static inline int __down_read_killable(struct rw_semaphore *sem, unsigned long ip)
 {
-	return __down_read_common(sem, TASK_KILLABLE);
+	return __down_read_common(sem, TASK_KILLABLE, ip);
 }
 
 static inline int __down_read_trylock(struct rw_semaphore *sem)
@@ -1259,12 +1260,13 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
 /*
  * lock for writing
  */
-static inline int __down_write_common(struct rw_semaphore *sem, int state)
+static inline int __down_write_common(struct rw_semaphore *sem, int state,
+				      unsigned long ip)
 {
 	void *ret;
 
 	if (unlikely(!rwsem_write_trylock(sem))) {
-		trace_contention_begin(sem, _RET_IP_, LCB_F_WRITE | state);
+		trace_contention_begin(sem, ip, LCB_F_WRITE | state);
 		ret = rwsem_down_write_slowpath(sem, state);
 		trace_contention_end(sem);
 
@@ -1275,14 +1277,14 @@ static inline int __down_write_common(struct rw_semaphore *sem, int state)
 	return 0;
 }
 
-static inline void __down_write(struct rw_semaphore *sem)
+static inline void __down_write(struct rw_semaphore *sem, unsigned long ip)
 {
-	__down_write_common(sem, TASK_UNINTERRUPTIBLE);
+	__down_write_common(sem, TASK_UNINTERRUPTIBLE, ip);
 }
 
-static inline int __down_write_killable(struct rw_semaphore *sem)
+static inline int __down_write_killable(struct rw_semaphore *sem, unsigned long ip)
 {
-	return __down_write_common(sem, TASK_KILLABLE);
+	return __down_write_common(sem, TASK_KILLABLE, ip);
 }
 
 static inline int __down_write_trylock(struct rw_semaphore *sem)
@@ -1397,17 +1399,17 @@ void __init_rwsem(struct rw_semaphore *sem, const char *name,
 }
 EXPORT_SYMBOL(__init_rwsem);
 
-static inline void __down_read(struct rw_semaphore *sem)
+static inline void __down_read(struct rw_semaphore *sem, unsigned long ip)
 {
 	rwbase_read_lock(&sem->rwbase, TASK_UNINTERRUPTIBLE);
 }
 
-static inline int __down_read_interruptible(struct rw_semaphore *sem)
+static inline int __down_read_interruptible(struct rw_semaphore *sem, unsigned long ip)
 {
 	return rwbase_read_lock(&sem->rwbase, TASK_INTERRUPTIBLE);
 }
 
-static inline int __down_read_killable(struct rw_semaphore *sem)
+static inline int __down_read_killable(struct rw_semaphore *sem, unsigned long ip)
 {
 	return rwbase_read_lock(&sem->rwbase, TASK_KILLABLE);
 }
@@ -1422,12 +1424,12 @@ static inline void __up_read(struct rw_semaphore *sem)
 	rwbase_read_unlock(&sem->rwbase, TASK_NORMAL);
 }
 
-static inline void __sched __down_write(struct rw_semaphore *sem)
+static inline void __sched __down_write(struct rw_semaphore *sem, unsigned long ip)
 {
 	rwbase_write_lock(&sem->rwbase, TASK_UNINTERRUPTIBLE);
 }
 
-static inline int __sched __down_write_killable(struct rw_semaphore *sem)
+static inline int __sched __down_write_killable(struct rw_semaphore *sem, unsigned long ip)
 {
 	return rwbase_write_lock(&sem->rwbase, TASK_KILLABLE);
 }
@@ -1472,7 +1474,7 @@ void __sched down_read(struct rw_semaphore *sem)
 	might_sleep();
 	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
 
-	LOCK_CONTENDED(sem, __down_read_trylock, __down_read);
+	LOCK_CONTENDED_IP(sem, __down_read_trylock, __down_read);
 }
 EXPORT_SYMBOL(down_read);
 
@@ -1481,7 +1483,8 @@ int __sched down_read_interruptible(struct rw_semaphore *sem)
 	might_sleep();
 	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
 
-	if (LOCK_CONTENDED_RETURN(sem, __down_read_trylock, __down_read_interruptible)) {
+	if (LOCK_CONTENDED_RETURN_IP(sem, __down_read_trylock,
+				     __down_read_interruptible)) {
 		rwsem_release(&sem->dep_map, _RET_IP_);
 		return -EINTR;
 	}
@@ -1495,7 +1498,8 @@ int __sched down_read_killable(struct rw_semaphore *sem)
 	might_sleep();
 	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
 
-	if (LOCK_CONTENDED_RETURN(sem, __down_read_trylock, __down_read_killable)) {
+	if (LOCK_CONTENDED_RETURN_IP(sem, __down_read_trylock,
+				     __down_read_killable)) {
 		rwsem_release(&sem->dep_map, _RET_IP_);
 		return -EINTR;
 	}
@@ -1524,7 +1528,7 @@ void __sched down_write(struct rw_semaphore *sem)
 {
 	might_sleep();
 	rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
-	LOCK_CONTENDED(sem, __down_write_trylock, __down_write);
+	LOCK_CONTENDED_IP(sem, __down_write_trylock, __down_write);
 }
 EXPORT_SYMBOL(down_write);
 
@@ -1536,8 +1540,8 @@ int __sched down_write_killable(struct rw_semaphore *sem)
 	might_sleep();
 	rwsem_acquire(&sem->dep_map, 0, 0, _RET_IP_);
 
-	if (LOCK_CONTENDED_RETURN(sem, __down_write_trylock,
-				  __down_write_killable)) {
+	if (LOCK_CONTENDED_RETURN_IP(sem, __down_write_trylock,
+				     __down_write_killable)) {
 		rwsem_release(&sem->dep_map, _RET_IP_);
 		return -EINTR;
 	}
@@ -1596,7 +1600,7 @@ void down_read_nested(struct rw_semaphore *sem, int subclass)
 {
 	might_sleep();
 	rwsem_acquire_read(&sem->dep_map, subclass, 0, _RET_IP_);
-	LOCK_CONTENDED(sem, __down_read_trylock, __down_read);
+	LOCK_CONTENDED_IP(sem, __down_read_trylock, __down_read);
 }
 EXPORT_SYMBOL(down_read_nested);
 
@@ -1605,7 +1609,8 @@ int down_read_killable_nested(struct rw_semaphore *sem, int subclass)
 	might_sleep();
 	rwsem_acquire_read(&sem->dep_map, subclass, 0, _RET_IP_);
 
-	if (LOCK_CONTENDED_RETURN(sem, __down_read_trylock, __down_read_killable)) {
+	if (LOCK_CONTENDED_RETURN_IP(sem, __down_read_trylock,
+				     __down_read_killable)) {
 		rwsem_release(&sem->dep_map, _RET_IP_);
 		return -EINTR;
 	}
@@ -1618,14 +1623,14 @@ void _down_write_nest_lock(struct rw_semaphore *sem, struct lockdep_map *nest)
 {
 	might_sleep();
 	rwsem_acquire_nest(&sem->dep_map, 0, 0, nest, _RET_IP_);
-	LOCK_CONTENDED(sem, __down_write_trylock, __down_write);
+	LOCK_CONTENDED_IP(sem, __down_write_trylock, __down_write);
 }
 EXPORT_SYMBOL(_down_write_nest_lock);
 
 void down_read_non_owner(struct rw_semaphore *sem)
 {
 	might_sleep();
-	__down_read(sem);
+	__down_read(sem, _RET_IP_);
 	__rwsem_set_reader_owned(sem, NULL);
 }
 EXPORT_SYMBOL(down_read_non_owner);
 
@@ -1634,7 +1639,7 @@ void down_write_nested(struct rw_semaphore *sem, int subclass)
 {
 	might_sleep();
 	rwsem_acquire(&sem->dep_map, subclass, 0, _RET_IP_);
-	LOCK_CONTENDED(sem, __down_write_trylock, __down_write);
+	LOCK_CONTENDED_IP(sem, __down_write_trylock, __down_write);
 }
 EXPORT_SYMBOL(down_write_nested);
 
@@ -1643,8 +1648,8 @@ int __sched down_write_killable_nested(struct rw_semaphore *sem, int subclass)
 	might_sleep();
 	rwsem_acquire(&sem->dep_map, subclass, 0, _RET_IP_);
 
-	if (LOCK_CONTENDED_RETURN(sem, __down_write_trylock,
-				  __down_write_killable)) {
+	if (LOCK_CONTENDED_RETURN_IP(sem, __down_write_trylock,
+				     __down_write_killable)) {
 		rwsem_release(&sem->dep_map, _RET_IP_);
 		return -EINTR;
 	}
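[Editorial note: to illustrate what the _IP macro variants change, here is
a self-contained userspace toy version of the LOCK_CONTENDED_IP pattern.
The toy_* names are made up and the lockdep hooks are omitted; this is a
sketch of the mechanism, not the kernel implementation.]

	#include <stdbool.h>
	#include <stdio.h>

	#define RET_IP ((unsigned long)__builtin_return_address(0))

	struct toy_lock { bool held; };

	static bool toy_trylock(struct toy_lock *l)
	{
		if (l->held)
			return false;
		l->held = true;
		return true;
	}

	/* The slow path takes the caller-provided ip instead of computing
	 * its own, which would point back into toy_down(). */
	static void toy_lock_slow(struct toy_lock *l, unsigned long ip)
	{
		printf("contended at ip %#lx\n", ip);
		l->held = true;
	}

	/* Same shape as LOCK_CONTENDED_IP: the macro expands inside the API
	 * function, so RET_IP captures that function's caller and the value
	 * is forwarded down to the slow path. */
	#define LOCK_CONTENDED_IP(_lock, try, lock)	\
	do {						\
		if (!try(_lock))			\
			lock(_lock, RET_IP);		\
	} while (0)

	__attribute__((noinline))	/* like a non-inlined API entry */
	static void toy_down(struct toy_lock *l)
	{
		LOCK_CONTENDED_IP(l, toy_trylock, toy_lock_slow);
	}

	int main(void)
	{
		struct toy_lock l = { .held = true };	/* force contention */

		toy_down(&l);
		return 0;
	}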