From patchwork Tue Mar 22 18:57:08 2022
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 12788952
X-Patchwork-Delegate: bpf@iogearbox.net
X-Mailing-List: bpf@vger.kernel.org
From: Namhyung Kim <namhyung@kernel.org>
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
Cc: LKML, Thomas Gleixner, Steven Rostedt, Byungchul Park, "Paul E. McKenney",
McKenney" , Mathieu Desnoyers , Arnd Bergmann , Radoslaw Burny , linux-arch@vger.kernel.org, bpf@vger.kernel.org, Hyeonggon Yoo <42.hyeyoo@gmail.com> Subject: [PATCH 1/2] locking: Add lock contention tracepoints Date: Tue, 22 Mar 2022 11:57:08 -0700 Message-Id: <20220322185709.141236-2-namhyung@kernel.org> X-Mailer: git-send-email 2.35.1.894.gb6a874cedc-goog In-Reply-To: <20220322185709.141236-1-namhyung@kernel.org> References: <20220322185709.141236-1-namhyung@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org This adds two new lock contention tracepoints like below: * lock:contention_begin * lock:contention_end The lock:contention_begin takes a flags argument to classify locks. I found it useful to identify what kind of locks it's tracing like if it's spinning or sleeping, reader-writer lock, real-time, and per-cpu. Move tracepoint definitions into mutex.c so that we can use them without lockdep. Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Namhyung Kim --- include/trace/events/lock.h | 61 +++++++++++++++++++++++++++++++++++-- kernel/locking/lockdep.c | 1 - kernel/locking/mutex.c | 3 ++ 3 files changed, 61 insertions(+), 4 deletions(-) diff --git a/include/trace/events/lock.h b/include/trace/events/lock.h index d7512129a324..b9b6e3edd518 100644 --- a/include/trace/events/lock.h +++ b/include/trace/events/lock.h @@ -5,11 +5,21 @@ #if !defined(_TRACE_LOCK_H) || defined(TRACE_HEADER_MULTI_READ) #define _TRACE_LOCK_H -#include +#include #include +/* flags for lock:contention_begin */ +#define LCB_F_SPIN (1U << 0) +#define LCB_F_READ (1U << 1) +#define LCB_F_WRITE (1U << 2) +#define LCB_F_RT (1U << 3) +#define LCB_F_PERCPU (1U << 4) + + #ifdef CONFIG_LOCKDEP +#include + TRACE_EVENT(lock_acquire, TP_PROTO(struct lockdep_map *lock, unsigned int subclass, @@ -78,8 +88,53 @@ DEFINE_EVENT(lock, lock_acquired, TP_ARGS(lock, ip) ); -#endif -#endif +#endif /* CONFIG_LOCK_STAT */ +#endif /* CONFIG_LOCKDEP */ + +TRACE_EVENT(contention_begin, + + TP_PROTO(void *lock, unsigned int flags), + + TP_ARGS(lock, flags), + + TP_STRUCT__entry( + __field(void *, lock_addr) + __field(unsigned int, flags) + ), + + TP_fast_assign( + __entry->lock_addr = lock; + __entry->flags = flags; + ), + + TP_printk("%p (flags=%s)", __entry->lock_addr, + __print_flags(__entry->flags, "|", + { LCB_F_SPIN, "SPIN" }, + { LCB_F_READ, "READ" }, + { LCB_F_WRITE, "WRITE" }, + { LCB_F_RT, "RT" }, + { LCB_F_PERCPU, "PERCPU" } + )) +); + +TRACE_EVENT(contention_end, + + TP_PROTO(void *lock, int ret), + + TP_ARGS(lock, ret), + + TP_STRUCT__entry( + __field(void *, lock_addr) + __field(int, ret) + ), + + TP_fast_assign( + __entry->lock_addr = lock; + __entry->ret = ret; + ), + + TP_printk("%p (ret=%d)", __entry->lock_addr, __entry->ret) +); #endif /* _TRACE_LOCK_H */ diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c index 50036c10b518..08f8fb6a2d1e 100644 --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -60,7 +60,6 @@ #include "lockdep_internals.h" -#define CREATE_TRACE_POINTS #include #ifdef CONFIG_PROVE_LOCKING diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index 5e3585950ec8..ee2fd7614a93 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -30,6 +30,9 @@ #include #include +#define CREATE_TRACE_POINTS +#include + #ifndef CONFIG_PREEMPT_RT #include "mutex.h" From patchwork Tue Mar 22 18:57:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Namhyung Kim 
From patchwork Tue Mar 22 18:57:09 2022
X-Patchwork-Submitter: Namhyung Kim
X-Patchwork-Id: 12788953
X-Patchwork-Delegate: bpf@iogearbox.net
X-Mailing-List: bpf@vger.kernel.org
From: Namhyung Kim <namhyung@kernel.org>
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng
Cc: LKML, Thomas Gleixner, Steven Rostedt, Byungchul Park, "Paul E. McKenney",
McKenney" , Mathieu Desnoyers , Arnd Bergmann , Radoslaw Burny , linux-arch@vger.kernel.org, bpf@vger.kernel.org, Hyeonggon Yoo <42.hyeyoo@gmail.com> Subject: [PATCH 2/2] locking: Apply contention tracepoints in the slow path Date: Tue, 22 Mar 2022 11:57:09 -0700 Message-Id: <20220322185709.141236-3-namhyung@kernel.org> X-Mailer: git-send-email 2.35.1.894.gb6a874cedc-goog In-Reply-To: <20220322185709.141236-1-namhyung@kernel.org> References: <20220322185709.141236-1-namhyung@kernel.org> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: bpf@vger.kernel.org Adding the lock contention tracepoints in various lock function slow paths. Note that each arch can define spinlock differently, I only added it only to the generic qspinlock for now. Tested-by: Hyeonggon Yoo <42.hyeyoo@gmail.com> Signed-off-by: Namhyung Kim --- kernel/locking/mutex.c | 3 +++ kernel/locking/percpu-rwsem.c | 3 +++ kernel/locking/qrwlock.c | 9 +++++++++ kernel/locking/qspinlock.c | 5 +++++ kernel/locking/rtmutex.c | 11 +++++++++++ kernel/locking/rwbase_rt.c | 3 +++ kernel/locking/rwsem.c | 9 +++++++++ kernel/locking/semaphore.c | 15 ++++++++++++++- 8 files changed, 57 insertions(+), 1 deletion(-) diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index ee2fd7614a93..c88deda77cf2 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -644,6 +644,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas } set_current_state(state); + trace_contention_begin(lock, 0); for (;;) { bool first; @@ -710,6 +711,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas skip_wait: /* got the lock - cleanup and rejoice! */ lock_acquired(&lock->dep_map, ip); + trace_contention_end(lock, 0); if (ww_ctx) ww_mutex_lock_acquired(ww, ww_ctx); @@ -721,6 +723,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas err: __set_current_state(TASK_RUNNING); __mutex_remove_waiter(lock, &waiter); + trace_contention_end(lock, ret); err_early_kill: raw_spin_unlock(&lock->wait_lock); debug_mutex_free_waiter(&waiter); diff --git a/kernel/locking/percpu-rwsem.c b/kernel/locking/percpu-rwsem.c index c9fdae94e098..833043613af6 100644 --- a/kernel/locking/percpu-rwsem.c +++ b/kernel/locking/percpu-rwsem.c @@ -9,6 +9,7 @@ #include #include #include +#include int __percpu_init_rwsem(struct percpu_rw_semaphore *sem, const char *name, struct lock_class_key *key) @@ -154,6 +155,7 @@ static void percpu_rwsem_wait(struct percpu_rw_semaphore *sem, bool reader) } spin_unlock_irq(&sem->waiters.lock); + trace_contention_begin(sem, LCB_F_PERCPU | (reader ? 
diff --git a/kernel/locking/qrwlock.c b/kernel/locking/qrwlock.c
index ec36b73f4733..b9f6f963d77f 100644
--- a/kernel/locking/qrwlock.c
+++ b/kernel/locking/qrwlock.c
@@ -12,6 +12,7 @@
 #include <linux/percpu.h>
 #include <linux/hardirq.h>
 #include <linux/spinlock.h>
+#include <trace/events/lock.h>
 
 /**
  * queued_read_lock_slowpath - acquire read lock of a queue rwlock
@@ -34,6 +35,8 @@ void queued_read_lock_slowpath(struct qrwlock *lock)
 	}
 	atomic_sub(_QR_BIAS, &lock->cnts);
 
+	trace_contention_begin(lock, LCB_F_READ | LCB_F_SPIN);
+
 	/*
 	 * Put the reader into the wait queue
 	 */
@@ -51,6 +54,8 @@ void queued_read_lock_slowpath(struct qrwlock *lock)
 	 * Signal the next one in queue to become queue head
 	 */
 	arch_spin_unlock(&lock->wait_lock);
+
+	trace_contention_end(lock, 0);
 }
 EXPORT_SYMBOL(queued_read_lock_slowpath);
 
@@ -62,6 +67,8 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 {
 	int cnts;
 
+	trace_contention_begin(lock, LCB_F_WRITE | LCB_F_SPIN);
+
 	/* Put the writer into the wait queue */
 	arch_spin_lock(&lock->wait_lock);
 
@@ -79,5 +86,7 @@ void queued_write_lock_slowpath(struct qrwlock *lock)
 	} while (!atomic_try_cmpxchg_acquire(&lock->cnts, &cnts, _QW_LOCKED));
 unlock:
 	arch_spin_unlock(&lock->wait_lock);
+
+	trace_contention_end(lock, 0);
 }
 EXPORT_SYMBOL(queued_write_lock_slowpath);
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index cbff6ba53d56..65a9a10caa6f 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -22,6 +22,7 @@
 #include <linux/prefetch.h>
 #include <asm/byteorder.h>
 #include <asm/qspinlock.h>
+#include <trace/events/lock.h>
 
 /*
  * Include queued spinlock statistics code
@@ -401,6 +402,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	idx = node->count++;
 	tail = encode_tail(smp_processor_id(), idx);
 
+	trace_contention_begin(lock, LCB_F_SPIN);
+
 	/*
 	 * 4 nodes are allocated based on the assumption that there will
 	 * not be nested NMIs taking spinlocks. That may not be true in
@@ -554,6 +557,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		pv_kick_node(lock, next);
 
 release:
+	trace_contention_end(lock, 0);
+
 	/*
 	 * release the node
 	 */
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 8555c4efe97c..7779ee8abc2a 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -24,6 +24,8 @@
 #include <linux/sched/wake_q.h>
 #include <linux/ww_mutex.h>
 
+#include <trace/events/lock.h>
+
 #include "rtmutex_common.h"
 
 #ifndef WW_RT
@@ -1579,6 +1581,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 
 	set_current_state(state);
 
+	trace_contention_begin(lock, LCB_F_RT);
+
 	ret = task_blocks_on_rt_mutex(lock, waiter, current, ww_ctx, chwalk);
 	if (likely(!ret))
 		ret = rt_mutex_slowlock_block(lock, ww_ctx, state, NULL, waiter);
@@ -1601,6 +1605,9 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
 	 * unconditionally. We might have to fix that up.
 	 */
 	fixup_rt_mutex_waiters(lock);
+
+	trace_contention_end(lock, ret);
+
 	return ret;
 }
 
@@ -1683,6 +1690,8 @@ static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
 	/* Save current state and set state to TASK_RTLOCK_WAIT */
 	current_save_and_set_rtlock_wait_state();
 
+	trace_contention_begin(lock, LCB_F_RT);
+
 	task_blocks_on_rt_mutex(lock, &waiter, current, NULL, RT_MUTEX_MIN_CHAINWALK);
 
 	for (;;) {
@@ -1712,6 +1721,8 @@ static void __sched rtlock_slowlock_locked(struct rt_mutex_base *lock)
 	 */
 	fixup_rt_mutex_waiters(lock);
 	debug_rt_mutex_free_waiter(&waiter);
+
+	trace_contention_end(lock, 0);
 }
 
 static __always_inline void __sched rtlock_slowlock(struct rt_mutex_base *lock)
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 6fd3162e4098..ec7b1fda7982 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -247,11 +247,13 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 		goto out_unlock;
 
 	rwbase_set_and_save_current_state(state);
+	trace_contention_begin(rwb, LCB_F_WRITE | LCB_F_RT);
 	for (;;) {
 		/* Optimized out for rwlocks */
 		if (rwbase_signal_pending_state(state, current)) {
 			rwbase_restore_current_state();
 			__rwbase_write_unlock(rwb, 0, flags);
+			trace_contention_end(rwb, -EINTR);
 			return -EINTR;
 		}
 
@@ -265,6 +267,7 @@ static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 		set_current_state(state);
 	}
 	rwbase_restore_current_state();
+	trace_contention_end(rwb, 0);
 
 out_unlock:
 	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index acde5d6f1254..465db7bd84f8 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -27,6 +27,7 @@
 #include <linux/export.h>
 #include <linux/rwsem.h>
 #include <linux/atomic.h>
+#include <trace/events/lock.h>
 
 #ifndef CONFIG_PREEMPT_RT
 #include "lock_events.h"
@@ -1014,6 +1015,8 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 	raw_spin_unlock_irq(&sem->wait_lock);
 	wake_up_q(&wake_q);
 
+	trace_contention_begin(sem, LCB_F_READ);
+
 	/* wait to be given the lock */
 	for (;;) {
 		set_current_state(state);
@@ -1035,6 +1038,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 
 	__set_current_state(TASK_RUNNING);
 	lockevent_inc(rwsem_rlock);
+	trace_contention_end(sem, 0);
 	return sem;
 
 out_nolock:
@@ -1042,6 +1046,7 @@ rwsem_down_read_slowpath(struct rw_semaphore *sem, long count, unsigned int stat
 	raw_spin_unlock_irq(&sem->wait_lock);
 	__set_current_state(TASK_RUNNING);
 	lockevent_inc(rwsem_rlock_fail);
+	trace_contention_end(sem, -EINTR);
 	return ERR_PTR(-EINTR);
 }
 
@@ -1109,6 +1114,8 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 wait:
 	/* wait until we successfully acquire the lock */
 	set_current_state(state);
+	trace_contention_begin(sem, LCB_F_WRITE);
+
 	for (;;) {
 		if (rwsem_try_write_lock(sem, &waiter)) {
 			/* rwsem_try_write_lock() implies ACQUIRE on success */
@@ -1148,6 +1155,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	__set_current_state(TASK_RUNNING);
 	raw_spin_unlock_irq(&sem->wait_lock);
 	lockevent_inc(rwsem_wlock);
+	trace_contention_end(sem, 0);
 	return sem;
 
 out_nolock:
@@ -1159,6 +1167,7 @@ rwsem_down_write_slowpath(struct rw_semaphore *sem, int state)
 	raw_spin_unlock_irq(&sem->wait_lock);
 	wake_up_q(&wake_q);
 	lockevent_inc(rwsem_wlock_fail);
+	trace_contention_end(sem, -EINTR);
 	return ERR_PTR(-EINTR);
 }
diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
index 9ee381e4d2a4..f2654d2fe43a 100644
--- a/kernel/locking/semaphore.c
+++ b/kernel/locking/semaphore.c
@@ -32,6 +32,7 @@
 #include <linux/semaphore.h>
 #include <linux/spinlock.h>
 #include <linux/ftrace.h>
+#include <trace/events/lock.h>
 
 static noinline void __down(struct semaphore *sem);
 static noinline int __down_interruptible(struct semaphore *sem);
@@ -205,7 +206,7 @@ struct semaphore_waiter {
  * constant, and thus optimised away by the compiler.  Likewise the
  * 'timeout' parameter for the cases without timeouts.
  */
-static inline int __sched __down_common(struct semaphore *sem, long state,
+static inline int __sched ___down_common(struct semaphore *sem, long state,
 				long timeout)
 {
 	struct semaphore_waiter waiter;
@@ -236,6 +237,18 @@ static inline int __sched __down_common(struct semaphore *sem, long state,
 	return -EINTR;
 }
 
+static inline int __sched __down_common(struct semaphore *sem, long state,
+					long timeout)
+{
+	int ret;
+
+	trace_contention_begin(sem, 0);
+	ret = ___down_common(sem, state, timeout);
+	trace_contention_end(sem, ret);
+
+	return ret;
+}
+
 static noinline void __sched __down(struct semaphore *sem)
 {
 	__down_common(sem, TASK_UNINTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT);
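[Aside, not part of the series: since the series is delegated to the bpf
tree, here is a sketch of how a BPF program might pair the two
tracepoints per task to accumulate contended wait time per lock address,
roughly in the spirit of a perf/BPF contention profiler. The context
struct layouts below are hand-written assumptions mirroring
TP_STRUCT__entry from patch 1 plus the 8-byte common trace header, and
all map and function names (begin_ts, wait_ns, on_contention_*) are
illustrative.]

/* lock_wait.bpf.c -- illustrative sketch only */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

/* Assumed raw tracepoint layout: common trace fields, then the
 * TP_STRUCT__entry fields defined in patch 1. */
struct contention_begin_ctx {
	__u64 __common;		/* common_type/flags/preempt_count/pid */
	void *lock_addr;
	__u32 flags;
};

struct contention_end_ctx {
	__u64 __common;
	void *lock_addr;
	int ret;
};

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 16384);
	__type(key, __u32);	/* waiting task (tid) */
	__type(value, __u64);	/* timestamp at contention_begin (ns) */
} begin_ts SEC(".maps");

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 16384);
	__type(key, __u64);	/* lock address */
	__type(value, __u64);	/* accumulated wait time (ns) */
} wait_ns SEC(".maps");

SEC("tracepoint/lock/contention_begin")
int on_contention_begin(struct contention_begin_ctx *ctx)
{
	__u32 tid = (__u32)bpf_get_current_pid_tgid();
	__u64 now = bpf_ktime_get_ns();

	/* begin/end always pair on the same task, so key by tid */
	bpf_map_update_elem(&begin_ts, &tid, &now, BPF_ANY);
	return 0;
}

SEC("tracepoint/lock/contention_end")
int on_contention_end(struct contention_end_ctx *ctx)
{
	__u32 tid = (__u32)bpf_get_current_pid_tgid();
	__u64 key = (__u64)ctx->lock_addr;
	__u64 *start, *total, delta;

	start = bpf_map_lookup_elem(&begin_ts, &tid);
	if (!start)
		return 0;	/* missed the begin event */

	delta = bpf_ktime_get_ns() - *start;
	bpf_map_delete_elem(&begin_ts, &tid);

	/* accumulate contended wait time per lock address */
	total = bpf_map_lookup_elem(&wait_ns, &key);
	if (total)
		__sync_fetch_and_add(total, delta);
	else
		bpf_map_update_elem(&wait_ns, &key, &delta, BPF_NOEXIST);
	return 0;
}

[A userspace loader would dump wait_ns and symbolize the lock addresses.
Note that nested locks taken by the same task would need a small stack
of begin timestamps per tid rather than the single slot used here.]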