From patchwork Thu Mar 28 04:45:58 2013
From: Mike Turquette
To: linux-kernel@vger.kernel.org
Cc: ulf.hansson@linaro.org, linaro-kernel@lists.linaro.org,
	patches@linaro.org, laurent.pinchart@ideasonboard.com,
	rajagopal.venkat@linaro.org, davidb@codeaurora.org,
	tglx@linutronix.de, linux-arm-kernel@lists.infradead.org
Subject: [PATCH 2/2] clk: allow reentrant calls into the clk framework
Date: Wed, 27 Mar 2013 21:45:58 -0700
Message-Id: <1364445958-2999-3-git-send-email-mturquette@linaro.org>
In-Reply-To: <1364445958-2999-1-git-send-email-mturquette@linaro.org>
References: <1364368183-24420-1-git-send-email-mturquette@linaro.org>
	<1364445958-2999-1-git-send-email-mturquette@linaro.org>

Reentrancy into the clock framework is necessary for clock operations
that result in nested calls to
the clk API.  A common example is a clock that is prepared via an i2c
transaction, such as a clock inside of a discrete audio chip or a power
management IC.  The i2c subsystem itself will use the clk API, resulting
in a deadlock:

clk_prepare(audio_clk)
	i2c_transfer(..)
		clk_prepare(i2c_controller_clk)

The ability to reenter the clock framework prevents this deadlock.

Other use cases exist, such as allowing .set_rate callbacks to call
clk_set_parent to achieve the best rate, or to save power in certain
configurations.  Yet another example is performing pinctrl operations
from a clk_ops callback; calls into the pinctrl subsystem may call
clk_{un}prepare on an unrelated clock.  Allowing nested calls to reenter
the clock framework enables all of these use cases.

Reentrancy is implemented by two global pointers that track the owner
currently holding a global lock.  One pointer tracks the owner during
sleepable, mutex-protected operations and the other tracks the owner
during non-interruptible, spinlock-protected operations.

When the clk framework is entered we try to hold the global lock.  If it
is held we compare the current task against the current owner; a match
implies a nested call and we reenter.  If the values do not match then
we block on the lock until it is released.
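The owner/refcount scheme described above can be sketched in user space. This is a hypothetical simplified model, not the kernel code: a plain integer task id stands in for the kernel's `current` task pointer, a `locked` flag stands in for `prepare_lock` and `mutex_trylock()`, and `assert()` stands in for `WARN_ON_ONCE()`.

```c
#include <assert.h>

/*
 * User-space sketch of the reentrant-lock pattern (hypothetical model,
 * not the kernel implementation).  An integer task id replaces the
 * kernel's `current` pointer; `locked` replaces the prepare_lock mutex.
 */
static int locked;		/* models mutex_trylock() success/failure */
static int prepare_owner;	/* 0 means "no owner" */
static int prepare_refcnt;

static void clk_prepare_lock(int task)
{
	if (locked) {			/* "trylock" failed ... */
		if (prepare_owner == task) {
			prepare_refcnt++;	/* ... but we own it: reenter */
			return;
		}
		/* the real code blocks in mutex_lock() here */
		assert(0 && "another task holds the lock; we would block");
	}
	locked = 1;
	assert(prepare_owner == 0);
	assert(prepare_refcnt == 0);
	prepare_owner = task;
	prepare_refcnt = 1;
}

static void clk_prepare_unlock(int task)
{
	assert(prepare_owner == task);
	assert(prepare_refcnt != 0);

	if (--prepare_refcnt)
		return;		/* an outer critical section is still active */
	prepare_owner = 0;
	locked = 0;
}
```

A nested call from the owning task bumps the refcount instead of blocking, and only the final unlock releases the lock.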
Signed-off-by: Mike Turquette
Cc: Rajagopal Venkat
Cc: David Brown
Cc: Ulf Hansson
Cc: Laurent Pinchart
Cc: Thomas Gleixner
Reviewed-by: Thomas Gleixner
---
Changes since v4:
 * remove unnecessary atomic operations
 * remove casting bugs
 * place reentrancy logic into locking helper functions
 * improve debugging with reference counting and WARNs

 drivers/clk/clk.c | 43 +++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 41 insertions(+), 2 deletions(-)

diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index bea47d5..fe7c054 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -19,10 +19,17 @@
 #include <linux/of.h>
 #include <linux/device.h>
 #include <linux/init.h>
+#include <linux/sched.h>

 static DEFINE_SPINLOCK(enable_lock);
 static DEFINE_MUTEX(prepare_lock);

+static struct task_struct *prepare_owner;
+static struct task_struct *enable_owner;
+
+static int prepare_refcnt;
+static int enable_refcnt;
+
 static HLIST_HEAD(clk_root_list);
 static HLIST_HEAD(clk_orphan_list);
 static LIST_HEAD(clk_notifier_list);
@@ -30,21 +37,53 @@ static LIST_HEAD(clk_notifier_list);

 /***           locking             ***/
 static void clk_prepare_lock(void)
 {
-	mutex_lock(&prepare_lock);
+	if (!mutex_trylock(&prepare_lock)) {
+		if (prepare_owner == current) {
+			prepare_refcnt++;
+			return;
+		}
+		mutex_lock(&prepare_lock);
+	}
+	WARN_ON_ONCE(prepare_owner != NULL);
+	WARN_ON_ONCE(prepare_refcnt != 0);
+	prepare_owner = current;
+	prepare_refcnt = 1;
 }

 static void clk_prepare_unlock(void)
 {
+	WARN_ON_ONCE(prepare_owner != current);
+	WARN_ON_ONCE(prepare_refcnt == 0);
+
+	if (--prepare_refcnt)
+		return;
+	prepare_owner = NULL;
 	mutex_unlock(&prepare_lock);
 }

 static void clk_enable_lock(unsigned long *flags)
 {
-	spin_lock_irqsave(&enable_lock, *flags);
+	if (!spin_trylock_irqsave(&enable_lock, *flags)) {
+		if (enable_owner == current) {
+			enable_refcnt++;
+			return;
+		}
+		spin_lock_irqsave(&enable_lock, *flags);
+	}
+	WARN_ON_ONCE(enable_owner != NULL);
+	WARN_ON_ONCE(enable_refcnt != 0);
+	enable_owner = current;
+	enable_refcnt = 1;
 }

 static void clk_enable_unlock(unsigned long *flags)
 {
+	WARN_ON_ONCE(enable_owner != current);
+	WARN_ON_ONCE(enable_refcnt == 0);
+
+	if (--enable_refcnt)
+		return;
+	enable_owner = NULL;
 	spin_unlock_irqrestore(&enable_lock, *flags);
 }
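For illustration, the audio-clock-over-i2c deadlock from the commit message can be replayed against the same kind of single-task model. Everything here is hypothetical scaffolding (the clock names, the integer task id, the `framework_lock` helpers); only the owner/refcount logic mirrors the patch.

```c
#include <assert.h>

/* Hypothetical single-task model of the owner/refcount scheme. */
static int owner;	/* 0 means unowned */
static int refcnt;

static void framework_lock(int task)
{
	if (owner == task) {	/* nested entry by the lock owner */
		refcnt++;
		return;
	}
	assert(owner == 0);	/* a real lock would block here instead */
	owner = task;
	refcnt = 1;
}

static void framework_unlock(int task)
{
	assert(owner == task && refcnt > 0);
	if (--refcnt)
		return;
	owner = 0;
}

static int i2c_clk_prepares;	/* counts nested prepares that completed */

static void prepare_i2c_controller_clk(int task)
{
	framework_lock(task);	/* re-enters: the same task already owns it */
	i2c_clk_prepares++;
	framework_unlock(task);
}

static void prepare_audio_clk(int task)
{
	framework_lock(task);
	/* the .prepare op does an i2c transfer, which uses the clk API */
	prepare_i2c_controller_clk(task);
	framework_unlock(task);
}
```

Without the owner check, the inner `framework_lock()` would block forever on a lock its own task holds; with it, the nested prepare completes and the outer unlock releases the lock.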