From patchwork Fri May 1 17:07:06 2015
X-Patchwork-Submitter: Lina Iyer
X-Patchwork-Id: 6311961
From: Lina Iyer
To: ohad@wizery.com, s-anna@ti.com, Bjorn.Andersson@sonymobile.com, agross@codeaurora.org
Cc: linux-arm-msm@vger.kernel.org, linux-kernel@vger.kernel.org, galak@codeaurora.org, jhugo@codeaurora.org, Lina Iyer
Subject: [PATCH RFC] hwspinlock: Don't take software spinlock before hwspinlock
Date: Fri, 1 May 2015 11:07:06 -0600
Message-Id: <1430500026-47990-1-git-send-email-lina.iyer@linaro.org>
X-Mailer: git-send-email 2.1.4

In some uses of a hwspinlock, one entity acquires the lock and a different
entity releases it, giving a serialized hand-off from the locking entity to
the other. For example, cpuidle entry from Linux into the firmware that
powers down the core can be serialized across the context switch by taking
the hwspinlock in Linux and releasing it in the firmware. To support this,
do not force the caller of __hwspin_trylock() to acquire a kernel spinlock
before acquiring the hwspinlock.
Cc: Jeffrey Hugo
Cc: Ohad Ben-Cohen
Cc: Suman Anna
Cc: Andy Gross
Signed-off-by: Lina Iyer
---
 drivers/hwspinlock/hwspinlock_core.c | 56 ++++++++++++++++++++----------------
 include/linux/hwspinlock.h           |  1 +
 2 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
index 461a0d7..bdc59f2 100644
--- a/drivers/hwspinlock/hwspinlock_core.c
+++ b/drivers/hwspinlock/hwspinlock_core.c
@@ -105,30 +105,34 @@ int __hwspin_trylock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
 	 * problems with hwspinlock usage (e.g. scheduler checks like
 	 * 'scheduling while atomic' etc.)
 	 */
-	if (mode == HWLOCK_IRQSTATE)
-		ret = spin_trylock_irqsave(&hwlock->lock, *flags);
-	else if (mode == HWLOCK_IRQ)
-		ret = spin_trylock_irq(&hwlock->lock);
-	else
-		ret = spin_trylock(&hwlock->lock);
+	if (mode != HWLOCK_NOLOCK) {
+		if (mode == HWLOCK_IRQSTATE)
+			ret = spin_trylock_irqsave(&hwlock->lock, *flags);
+		else if (mode == HWLOCK_IRQ)
+			ret = spin_trylock_irq(&hwlock->lock);
+		else
+			ret = spin_trylock(&hwlock->lock);
 
-	/* is lock already taken by another context on the local cpu ? */
-	if (!ret)
-		return -EBUSY;
+		/* is lock already taken by another context on the local cpu? */
+		if (!ret)
+			return -EBUSY;
+	}
 
 	/* try to take the hwspinlock device */
 	ret = hwlock->bank->ops->trylock(hwlock);
 
-	/* if hwlock is already taken, undo spin_trylock_* and exit */
-	if (!ret) {
-		if (mode == HWLOCK_IRQSTATE)
-			spin_unlock_irqrestore(&hwlock->lock, *flags);
-		else if (mode == HWLOCK_IRQ)
-			spin_unlock_irq(&hwlock->lock);
-		else
-			spin_unlock(&hwlock->lock);
+	if (mode != HWLOCK_NOLOCK) {
+		/* if hwlock is already taken, undo spin_trylock_* and exit */
+		if (!ret) {
+			if (mode == HWLOCK_IRQSTATE)
+				spin_unlock_irqrestore(&hwlock->lock, *flags);
+			else if (mode == HWLOCK_IRQ)
+				spin_unlock_irq(&hwlock->lock);
+			else
+				spin_unlock(&hwlock->lock);
 
-		return -EBUSY;
+			return -EBUSY;
+		}
 	}
 
 	/*
@@ -247,13 +251,15 @@ void __hwspin_unlock(struct hwspinlock *hwlock, int mode, unsigned long *flags)
 
 	hwlock->bank->ops->unlock(hwlock);
 
-	/* Undo the spin_trylock{_irq, _irqsave} called while locking */
-	if (mode == HWLOCK_IRQSTATE)
-		spin_unlock_irqrestore(&hwlock->lock, *flags);
-	else if (mode == HWLOCK_IRQ)
-		spin_unlock_irq(&hwlock->lock);
-	else
-		spin_unlock(&hwlock->lock);
+	if (mode != HWLOCK_NOLOCK) {
+		/* Undo the spin_trylock{_irq, _irqsave} called while locking */
+		if (mode == HWLOCK_IRQSTATE)
+			spin_unlock_irqrestore(&hwlock->lock, *flags);
+		else if (mode == HWLOCK_IRQ)
+			spin_unlock_irq(&hwlock->lock);
+		else
+			spin_unlock(&hwlock->lock);
+	}
 }
 EXPORT_SYMBOL_GPL(__hwspin_unlock);

diff --git a/include/linux/hwspinlock.h b/include/linux/hwspinlock.h
index 3343298..219b333 100644
--- a/include/linux/hwspinlock.h
+++ b/include/linux/hwspinlock.h
@@ -24,6 +24,7 @@
 /* hwspinlock mode argument */
 #define HWLOCK_IRQSTATE	0x01	/* Disable interrupts, save state */
 #define HWLOCK_IRQ	0x02	/* Disable interrupts, don't save state */
+#define HWLOCK_NOLOCK	0xFF	/* Don't take any lock */
 
 struct device;
 struct hwspinlock;