From patchwork Sat Mar 21 11:25:45 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:45 +0100
Message-Id: <20200321113240.841963112@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 01/20] PCI/switchtec: Fix init_completion race condition with poll_wait()

From: Logan Gunthorpe

The call to init_completion() in mrpc_queue_cmd() can theoretically
race with the call to poll_wait() in switchtec_dev_poll().

  poll()                                write()
    switchtec_dev_poll()                  switchtec_dev_write()
      poll_wait(&s->comp.wait);             mrpc_queue_cmd()
                                              init_completion(&s->comp)
                                                init_waitqueue_head(&s->comp.wait)

To my knowledge, no one has hit this bug.

Fix this by using reinit_completion() instead of init_completion() in
mrpc_queue_cmd().
Fixes: 080b47def5e5 ("MicroSemi Switchtec management interface driver")
Reported-by: Sebastian Andrzej Siewior
Signed-off-by: Logan Gunthorpe
Signed-off-by: Thomas Gleixner
Acked-by: Bjorn Helgaas
Cc: Kurt Schwemmer
Cc: linux-pci@vger.kernel.org
Link: https://lkml.kernel.org/r/20200313183608.2646-1-logang@deltatee.com
---
 drivers/pci/switch/switchtec.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
index a823b4b8ef8a..81dc7ac01381 100644
--- a/drivers/pci/switch/switchtec.c
+++ b/drivers/pci/switch/switchtec.c
@@ -175,7 +175,7 @@ static int mrpc_queue_cmd(struct switchtec_user *stuser)
 	kref_get(&stuser->kref);
 	stuser->read_len = sizeof(stuser->data);
 	stuser_set_state(stuser, MRPC_QUEUED);
-	init_completion(&stuser->comp);
+	reinit_completion(&stuser->comp);
 	list_add_tail(&stuser->list, &stdev->mrpc_queue);

 	mrpc_cmd_submit(stdev);
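Why reinit_completion() closes the race is visible in the initializer pair
itself: only init_completion() touches the wait queue head that poll_wait()
may be using concurrently. Roughly, simplified from <linux/completion.h> at
this series' baseline:

    static inline void init_completion(struct completion *x)
    {
            x->done = 0;
            init_waitqueue_head(&x->wait);  /* re-run per command: the race */
    }

    static inline void reinit_completion(struct completion *x)
    {
            x->done = 0;    /* resets the count, leaves the wait queue alone */
    }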
From patchwork Sat Mar 21 11:25:46 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:46 +0100
Message-Id: <20200321113240.936097534@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 02/20] pci/switchtec: Replace completion wait queue usage for poll

From: Sebastian Andrzej Siewior

The poll callback is using the completion wait queue and sticks it into
poll_wait() to wake up pollers after a command has completed.

This works to some extent, but cannot provide EPOLLEXCLUSIVE support
because the waker side uses complete_all() which unconditionally wakes up
all waiters. complete_all() is required because completions internally use
exclusive wait and complete() only wakes up one waiter by default.

This mixes conceptually different mechanisms and relies on internal
implementation details of completions, which in turn puts constraints on
changing the internal implementation of completions.

Replace it with a regular wait queue and store the state in struct
switchtec_user.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Reviewed-by: Logan Gunthorpe
Acked-by: Peter Zijlstra (Intel)
Acked-by: Bjorn Helgaas
Cc: Kurt Schwemmer
Cc: linux-pci@vger.kernel.org
---
V2: Reworded changelog.
---
 drivers/pci/switch/switchtec.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

--- a/drivers/pci/switch/switchtec.c
+++ b/drivers/pci/switch/switchtec.c
@@ -52,10 +52,11 @@ struct switchtec_user {

 	enum mrpc_state state;

-	struct completion comp;
+	wait_queue_head_t cmd_comp;
 	struct kref kref;
 	struct list_head list;

+	bool cmd_done;
 	u32 cmd;
 	u32 status;
 	u32 return_code;
@@ -77,7 +78,7 @@ static struct switchtec_user *stuser_cre
 	stuser->stdev = stdev;
 	kref_init(&stuser->kref);
 	INIT_LIST_HEAD(&stuser->list);
-	init_completion(&stuser->comp);
+	init_waitqueue_head(&stuser->cmd_comp);
 	stuser->event_cnt = atomic_read(&stdev->event_cnt);

 	dev_dbg(&stdev->dev, "%s: %p\n", __func__, stuser);
@@ -175,7 +176,7 @@ static int mrpc_queue_cmd(struct switcht
 	kref_get(&stuser->kref);
 	stuser->read_len = sizeof(stuser->data);
 	stuser_set_state(stuser, MRPC_QUEUED);
-	reinit_completion(&stuser->comp);
+	stuser->cmd_done = false;
 	list_add_tail(&stuser->list, &stdev->mrpc_queue);

 	mrpc_cmd_submit(stdev);
@@ -222,7 +223,8 @@ static void mrpc_complete_cmd(struct swi
 		memcpy_fromio(stuser->data, &stdev->mmio_mrpc->output_data,
 			      stuser->read_len);
 out:
-	complete_all(&stuser->comp);
+	stuser->cmd_done = true;
+	wake_up_interruptible(&stuser->cmd_comp);
 	list_del_init(&stuser->list);
 	stuser_put(stuser);
 	stdev->mrpc_busy = 0;
@@ -529,10 +531,11 @@ static ssize_t switchtec_dev_read(struct
 	mutex_unlock(&stdev->mrpc_mutex);

 	if (filp->f_flags & O_NONBLOCK) {
-		if (!try_wait_for_completion(&stuser->comp))
+		if (!stuser->cmd_done)
 			return -EAGAIN;
 	} else {
-		rc = wait_for_completion_interruptible(&stuser->comp);
+		rc = wait_event_interruptible(stuser->cmd_comp,
+					      stuser->cmd_done);
 		if (rc < 0)
 			return rc;
 	}
@@ -580,7 +583,7 @@ static __poll_t switchtec_dev_poll(struc
 	struct switchtec_dev *stdev = stuser->stdev;
 	__poll_t ret = 0;

-	poll_wait(filp, &stuser->comp.wait, wait);
+	poll_wait(filp, &stuser->cmd_comp, wait);
 	poll_wait(filp, &stdev->event_wq, wait);

 	if (lock_mutex_and_test_alive(stdev))
@@ -588,7 +591,7 @@ static __poll_t switchtec_dev_poll(struc
 	mutex_unlock(&stdev->mrpc_mutex);

-	if (try_wait_for_completion(&stuser->comp))
+	if (stuser->cmd_done)
 		ret |= EPOLLIN | EPOLLRDNORM;

 	if (stuser->event_cnt != atomic_read(&stdev->event_cnt))
@@ -1272,7 +1275,8 @@ static void stdev_kill(struct switchtec_

 	/* Wake up and kill any users waiting on an MRPC request */
 	list_for_each_entry_safe(stuser, tmpuser, &stdev->mrpc_queue, list) {
-		complete_all(&stuser->comp);
+		stuser->cmd_done = true;
+		wake_up_interruptible(&stuser->cmd_comp);
 		list_del_init(&stuser->list);
 		stuser_put(stuser);
 	}
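The pattern the driver switches to is the standard wait queue plus
condition flag idiom; in the abstract it looks like this (a sketch with
illustrative names, not driver code):

    wait_queue_head_t waitq;    /* init_waitqueue_head(&waitq) once at setup */
    bool cond;

    /* waiter: sleeps until cond is true; returns -ERESTARTSYS on a signal */
    ret = wait_event_interruptible(waitq, cond);

    /* waker: set the condition first, then wake */
    cond = true;
    wake_up_interruptible(&waitq);

Because waitq is a real wait_queue_head_t, poll() can hand it directly to
poll_wait(), which is exactly what the completion-based variant could not
do without peeking into completion internals.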
From patchwork Sat Mar 21 11:25:47 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:47 +0100
Message-Id: <20200321113241.043380271@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 03/20] usb: gadget: Use completion interface instead of open coding it
From: Thomas Gleixner

ep_io() uses a completion on stack and open codes the waiting with:

  wait_event_interruptible (done.wait, done.done);

and

  wait_event (done.wait, done.done);

This waits in non-exclusive mode for complete(), but there is no reason to
do so because the completion can only be waited for by the task itself and
complete() wakes exactly one exclusive waiter.

Replace the open coded implementation with the corresponding
wait_for_completion*() functions.

No functional change.

Reported-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Reviewed-by: Greg Kroah-Hartman
Cc: Felipe Balbi
Cc: linux-usb@vger.kernel.org
Acked-by: Felipe Balbi
---
V2: New patch to avoid the conversion to swait interfaces later
---
 drivers/usb/gadget/legacy/inode.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/usb/gadget/legacy/inode.c
+++ b/drivers/usb/gadget/legacy/inode.c
@@ -344,7 +344,7 @@ ep_io (struct ep_data *epdata, void *buf
 	spin_unlock_irq (&epdata->dev->lock);

 	if (likely (value == 0)) {
-		value = wait_event_interruptible (done.wait, done.done);
+		value = wait_for_completion_interruptible(&done);
 		if (value != 0) {
 			spin_lock_irq (&epdata->dev->lock);
 			if (likely (epdata->ep != NULL)) {
@@ -353,7 +353,7 @@ ep_io (struct ep_data *epdata, void *buf
 				usb_ep_dequeue (epdata->ep, epdata->req);
 				spin_unlock_irq (&epdata->dev->lock);

-				wait_event (done.wait, done.done);
+				wait_for_completion(&done);
 				if (epdata->status == -ECONNRESET)
 					epdata->status = -EINTR;
 			} else {
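After the patch, ep_io() follows the canonical on-stack completion pattern;
roughly (a sketch, with the USB dequeue standing in for driver-specific
cancellation):

    struct completion done;
    int ret;

    init_completion(&done);
    /* queue asynchronous work whose callback ends with complete(&done) */

    ret = wait_for_completion_interruptible(&done);
    if (ret) {
            /* a signal arrived: cancel the request (usb_ep_dequeue() here),
             * then wait non-interruptibly until the callback has really run,
             * because "done" lives on this stack frame
             */
            wait_for_completion(&done);
    }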
From patchwork Sat Mar 21 11:25:48 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:48 +0100
Message-Id: <20200321113241.150783464@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 04/20] orinoco_usb: Use the regular completion interfaces

From: Thomas Gleixner

The completion usage in this driver is interesting:

  - it uses a magic complete function which according to the comment was
    implemented by invoking complete() four times in a row because
    complete_all() was not exported at that time.

  - it uses an open coded wait/poll which checks completion::done. Only one
    wait side (device removal) uses the regular wait_for_completion()
    interface.

The rationale behind this is to prevent wait_for_completion() from
consuming completion::done, which would prevent that all waiters are
woken. This is not necessary with complete_all() as that sets
completion::done to UINT_MAX which is left unmodified by the woken
waiters.

Replace the magic complete function with complete_all() and convert the
open coded wait/poll to regular completion interfaces.

This changes the wait to exclusive wait mode. But that does not make any
difference because the wakers use complete_all() which ignores the
exclusive mode.

Reported-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Reviewed-by: Greg Kroah-Hartman
Cc: Kalle Valo
Cc: "David S. Miller"
Cc: linux-wireless@vger.kernel.org
Cc: netdev@vger.kernel.org
Cc: linux-usb@vger.kernel.org
---
V2: New patch to avoid conversion to swait functions later.
---
 drivers/net/wireless/intersil/orinoco/orinoco_usb.c | 21 ++++----------------
 1 file changed, 5 insertions(+), 16 deletions(-)

--- a/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
+++ b/drivers/net/wireless/intersil/orinoco/orinoco_usb.c
@@ -365,17 +365,6 @@ static struct request_context *ezusb_all
 	return ctx;
 }

-/* Hopefully the real complete_all will soon be exported, in the mean
- * while this should work.
- */
-static inline void ezusb_complete_all(struct completion *comp)
-{
-	complete(comp);
-	complete(comp);
-	complete(comp);
-	complete(comp);
-}
-
 static void ezusb_ctx_complete(struct request_context *ctx)
 {
 	struct ezusb_priv *upriv = ctx->upriv;
@@ -409,7 +398,7 @@ static void ezusb_ctx_complete(struct re
 			netif_wake_queue(dev);
 		}
-		ezusb_complete_all(&ctx->done);
+		complete_all(&ctx->done);
 		ezusb_request_context_put(ctx);
 		break;

@@ -419,7 +408,7 @@ static void ezusb_ctx_complete(struct re
 		/* This is normal, as all request contexts get flushed
 		 * when the device is disconnected */
 		err("Called, CTX not terminating, but device gone");
-		ezusb_complete_all(&ctx->done);
+		complete_all(&ctx->done);
 		ezusb_request_context_put(ctx);
 		break;
 	}
@@ -690,11 +679,11 @@ static void ezusb_req_ctx_wait(struct ez
 		 * get the chance to run themselves. So we make sure
 		 * that we don't sleep for ever */
 		int msecs = DEF_TIMEOUT * (1000 / HZ);
-		while (!ctx->done.done && msecs--)
+
+		while (!try_wait_for_completion(&ctx->done) && msecs--)
 			udelay(1000);
 	} else {
-		wait_event_interruptible(ctx->done.wait,
-					 ctx->done.done);
+		wait_for_completion(&ctx->done);
 	}
 	break;
 default:
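The changelog's point about completion::done can be seen in a simplified
sketch of the two wakeup flavours (locking and the actual wakeup calls
elided; not the literal kernel source):

    void complete(struct completion *x)
    {
            x->done++;              /* one token; consumed by one exclusive waiter */
            /* wake up one waiter */
    }

    void complete_all(struct completion *x)
    {
            x->done = UINT_MAX;     /* saturated; woken waiters leave it untouched */
            /* wake up all waiters */
    }

With complete_all() setting the saturating value, every past and future
waiter sees the completion as done, so the four-fold complete() hack and
the open coded polling of ->done are both unnecessary.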
From patchwork Sat Mar 21 11:25:49 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:49 +0100
Message-Id: <20200321113241.246190285@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 05/20] acpi: Remove header dependency

From: Peter Zijlstra

In order to avoid future header hell, remove the inclusion of proc_fs.h
from acpi_bus.h. All it needs is a forward declaration of a struct.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
Cc: Darren Hart
Cc: Andy Shevchenko
Cc: platform-driver-x86@vger.kernel.org
Cc: Greg Kroah-Hartman
Cc: Zhang Rui
Cc: "Rafael J. Wysocki"
Cc: linux-pm@vger.kernel.org
Cc: Len Brown
Cc: linux-acpi@vger.kernel.org
---
 drivers/platform/x86/dell-smo8800.c                      | 1 +
 drivers/platform/x86/wmi.c                               | 1 +
 drivers/thermal/intel/int340x_thermal/acpi_thermal_rel.c | 1 +
 include/acpi/acpi_bus.h                                  | 2 +-
 4 files changed, 4 insertions(+), 1 deletion(-)

--- a/drivers/platform/x86/dell-smo8800.c
+++ b/drivers/platform/x86/dell-smo8800.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include

 struct smo8800_device {
 	u32 irq;                     /* acpi device irq */
--- a/drivers/platform/x86/wmi.c
+++ b/drivers/platform/x86/wmi.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include

 ACPI_MODULE_NAME("wmi");
--- a/drivers/thermal/intel/int340x_thermal/acpi_thermal_rel.c
+++ b/drivers/thermal/intel/int340x_thermal/acpi_thermal_rel.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include
 #include "acpi_thermal_rel.h"

 static acpi_handle acpi_thermal_rel_handle;
--- a/include/acpi/acpi_bus.h
+++ b/include/acpi/acpi_bus.h
@@ -80,7 +80,7 @@ bool acpi_dev_present(const char *hid, c

 #ifdef CONFIG_ACPI

-#include
+struct proc_dir_entry;

 #define ACPI_BUS_FILE_ROOT	"acpi"
 extern struct proc_dir_entry *acpi_root_dir;
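The acpi_bus.h hunk works because declaring a pointer only needs the
struct tag, not its layout. A minimal standalone illustration:

    struct proc_dir_entry;                          /* forward declaration */

    extern struct proc_dir_entry *acpi_root_dir;    /* OK: pointer to incomplete type */

    /* sizeof(struct proc_dir_entry) or acpi_root_dir->member would still
     * require the full definition from proc_fs.h */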
From patchwork Sat Mar 21 11:25:50 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:50 +0100
Message-Id: <20200321113241.339289758@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 06/20] nds32: Remove mm.h from asm/uaccess.h

From: Sebastian Andrzej Siewior

The defconfig compiles without linux/mm.h. With mm.h included the
include chain leads to:

|   CC      kernel/locking/percpu-rwsem.o
| In file included from include/linux/huge_mm.h:8,
|                  from include/linux/mm.h:567,
|                  from arch/nds32/include/asm/uaccess.h:,
|                  from include/linux/uaccess.h:11,
|                  from include/linux/sched/task.h:11,
|                  from include/linux/sched/signal.h:9,
|                  from include/linux/rcuwait.h:6,
|                  from include/linux/percpu-rwsem.h:8,
|                  from kernel/locking/percpu-rwsem.c:6:
| include/linux/fs.h:1422:29: error: array type has incomplete element type 'struct percpu_rw_semaphore'
|  1422 |  struct percpu_rw_semaphore rw_sem[SB_FREEZE_LEVELS];

once rcuwait.h includes linux/sched/signal.h.

Remove the linux/mm.h include.
Reported-by: kbuild test robot
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Cc: Nick Hu
Cc: Greentime Hu
Cc: Vincent Chen
---
V3: New patch
---
 arch/nds32/include/asm/uaccess.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/nds32/include/asm/uaccess.h b/arch/nds32/include/asm/uaccess.h
index 8916ad9f9f139..3a9219f53ee0d 100644
--- a/arch/nds32/include/asm/uaccess.h
+++ b/arch/nds32/include/asm/uaccess.h
@@ -11,7 +11,6 @@
 #include
 #include
 #include
-#include

 #define __asmeq(x, y)  ".ifnc " x "," y " ; .err ; .endif\n\t"
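The quoted build error is the classic incomplete-type limitation: fs.h
embeds an array of struct percpu_rw_semaphore in a structure, and unlike a
pointer an array member requires the complete type. A standalone
illustration (hypothetical names):

    struct sem;                      /* declared, not defined: incomplete */

    struct ok  { struct sem *p;   }; /* fine: pointer size is known     */
    struct bad { struct sem s[3]; }; /* error: array of incomplete type */

The same reasoning applies to patches 07-10, which drop the identical
stray mm.h include from the other architectures' uaccess.h headers.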
From patchwork Sat Mar 21 11:25:51 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:51 +0100
Message-Id: <20200321113241.434999165@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 07/20] csky: Remove mm.h from asm/uaccess.h

From: Sebastian Andrzej Siewior

The defconfig compiles without linux/mm.h. With mm.h included the
include chain leads to:

|   CC      kernel/locking/percpu-rwsem.o
| In file included from include/linux/huge_mm.h:8,
|                  from include/linux/mm.h:567,
|                  from arch/csky/include/asm/uaccess.h:,
|                  from include/linux/uaccess.h:11,
|                  from include/linux/sched/task.h:11,
|                  from include/linux/sched/signal.h:9,
|                  from include/linux/rcuwait.h:6,
|                  from include/linux/percpu-rwsem.h:8,
|                  from kernel/locking/percpu-rwsem.c:6:
| include/linux/fs.h:1422:29: error: array type has incomplete element type 'struct percpu_rw_semaphore'
|  1422 |  struct percpu_rw_semaphore rw_sem[SB_FREEZE_LEVELS];

once rcuwait.h includes linux/sched/signal.h.

Remove the linux/mm.h include.

Reported-by: kbuild test robot
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Cc: Guo Ren
Cc: linux-csky@vger.kernel.org
---
V3: New patch
---
 arch/csky/include/asm/uaccess.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/csky/include/asm/uaccess.h b/arch/csky/include/asm/uaccess.h
index eaa1c3403a424..abefa125b93cf 100644
--- a/arch/csky/include/asm/uaccess.h
+++ b/arch/csky/include/asm/uaccess.h
@@ -11,7 +11,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 #include
From patchwork Sat Mar 21 11:25:52 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:52 +0100
Message-Id: <20200321113241.531525286@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 08/20] hexagon: Remove mm.h from asm/uaccess.h

From: Sebastian Andrzej Siewior

The defconfig compiles without linux/mm.h. With mm.h included the
include chain leads to:

|   CC      kernel/locking/percpu-rwsem.o
| In file included from include/linux/huge_mm.h:8,
|                  from include/linux/mm.h:567,
|                  from arch/hexagon/include/asm/uaccess.h:,
|                  from include/linux/uaccess.h:11,
|                  from include/linux/sched/task.h:11,
|                  from include/linux/sched/signal.h:9,
|                  from include/linux/rcuwait.h:6,
|                  from include/linux/percpu-rwsem.h:8,
|                  from kernel/locking/percpu-rwsem.c:6:
| include/linux/fs.h:1422:29: error: array type has incomplete element type 'struct percpu_rw_semaphore'
|  1422 |  struct percpu_rw_semaphore rw_sem[SB_FREEZE_LEVELS];

once rcuwait.h includes linux/sched/signal.h.

Remove the linux/mm.h include.

Reported-by: kbuild test robot
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Cc: Brian Cain
Cc: linux-hexagon@vger.kernel.org
Acked-by: Brian Cain
---
V3: New patch
---
 arch/hexagon/include/asm/uaccess.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/hexagon/include/asm/uaccess.h b/arch/hexagon/include/asm/uaccess.h
index 00cb38faad0c4..c1019a736ff13 100644
--- a/arch/hexagon/include/asm/uaccess.h
+++ b/arch/hexagon/include/asm/uaccess.h
@@ -10,7 +10,6 @@
 /*
  * User space memory access functions
  */
-#include
 #include

 /*
From patchwork Sat Mar 21 11:25:53 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:53 +0100
Message-Id: <20200321113241.624070289@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 09/20] ia64: Remove mm.h from asm/uaccess.h

From: Sebastian Andrzej Siewior

The defconfig compiles without linux/mm.h. With mm.h included the
include chain leads to:

|   CC      kernel/locking/percpu-rwsem.o
| In file included from include/linux/huge_mm.h:8,
|                  from include/linux/mm.h:567,
|                  from arch/ia64/include/asm/uaccess.h:,
|                  from include/linux/uaccess.h:11,
|                  from include/linux/sched/task.h:11,
|                  from include/linux/sched/signal.h:9,
|                  from include/linux/rcuwait.h:6,
|                  from include/linux/percpu-rwsem.h:8,
|                  from kernel/locking/percpu-rwsem.c:6:
| include/linux/fs.h:1422:29: error: array type has incomplete element type 'struct percpu_rw_semaphore'
|  1422 |  struct percpu_rw_semaphore rw_sem[SB_FREEZE_LEVELS];

once rcuwait.h includes linux/sched/signal.h.

Remove the linux/mm.h include.
Reported-by: kbuild test robot
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Cc: Tony Luck
Cc: Fenghua Yu
Cc: linux-ia64@vger.kernel.org
---
V3: New patch
---
 arch/ia64/include/asm/uaccess.h | 1 -
 arch/ia64/kernel/process.c      | 1 +
 arch/ia64/mm/ioremap.c          | 1 +
 3 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index 89782ad3fb887..5c7e79eccaeed 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -35,7 +35,6 @@
 #include
 #include
-#include
 #include
 #include

diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
index 968b5f33e725e..743aaf5283278 100644
--- a/arch/ia64/kernel/process.c
+++ b/arch/ia64/kernel/process.c
@@ -681,3 +681,4 @@ machine_power_off (void)
 	machine_halt();
 }

+EXPORT_SYMBOL(ia64_delay_loop);
diff --git a/arch/ia64/mm/ioremap.c b/arch/ia64/mm/ioremap.c
index a09cfa0645369..55fd3eb753ff9 100644
--- a/arch/ia64/mm/ioremap.c
+++ b/arch/ia64/mm/ioremap.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
From patchwork Sat Mar 21 11:25:54 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:54 +0100
Message-Id: <20200321113241.719022171@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 10/20] microblaze: Remove mm.h from asm/uaccess.h

From: Sebastian Andrzej Siewior

The defconfig compiles without linux/mm.h. With mm.h included the
include chain leads to:

|   CC      kernel/locking/percpu-rwsem.o
| In file included from include/linux/huge_mm.h:8,
|                  from include/linux/mm.h:567,
|                  from arch/microblaze/include/asm/uaccess.h:,
|                  from include/linux/uaccess.h:11,
|                  from include/linux/sched/task.h:11,
|                  from include/linux/sched/signal.h:9,
|                  from include/linux/rcuwait.h:6,
|                  from include/linux/percpu-rwsem.h:8,
|                  from kernel/locking/percpu-rwsem.c:6:
| include/linux/fs.h:1422:29: error: array type has incomplete element type 'struct percpu_rw_semaphore'
|  1422 |  struct percpu_rw_semaphore rw_sem[SB_FREEZE_LEVELS];

once rcuwait.h includes linux/sched/signal.h.

Remove the linux/mm.h include.

Reported-by: kbuild test robot
Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
Cc: Michal Simek
---
V3: New patch
---
 arch/microblaze/include/asm/uaccess.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/microblaze/include/asm/uaccess.h b/arch/microblaze/include/asm/uaccess.h
index a1f206b90753a..4916d5fbea5e3 100644
--- a/arch/microblaze/include/asm/uaccess.h
+++ b/arch/microblaze/include/asm/uaccess.h
@@ -12,7 +12,6 @@
 #define _ASM_MICROBLAZE_UACCESS_H

 #include
-#include

 #include
 #include
From patchwork Sat Mar 21 11:25:55 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:55 +0100
Message-Id: <20200321113241.824030968@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 11/20] rcuwait: Add @state argument to rcuwait_wait_event()

From: Peter Zijlstra (Intel)

Extend rcuwait_wait_event() with a state variable so that it is not
restricted to UNINTERRUPTIBLE waits.

Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
Cc: Linus Torvalds
Cc: Davidlohr Bueso
---
 include/linux/rcuwait.h       | 12 ++++++++++--
 kernel/locking/percpu-rwsem.c |  2 +-
 2 files changed, 11 insertions(+), 3 deletions(-)

--- a/include/linux/rcuwait.h
+++ b/include/linux/rcuwait.h
@@ -3,6 +3,7 @@
 #define _LINUX_RCUWAIT_H_

 #include
+#include

 /*
  * rcuwait provides a way of blocking and waking up a single
@@ -30,23 +31,30 @@ extern void rcuwait_wake_up(struct rcuwa
  * The caller is responsible for locking around rcuwait_wait_event(),
  * such that writes to @task are properly serialized.
  */
-#define rcuwait_wait_event(w, condition)				\
+#define rcuwait_wait_event(w, condition, state)				\
 ({									\
+	int __ret = 0;							\
 	rcu_assign_pointer((w)->task, current);				\
 	for (;;) {							\
 		/*							\
 		 * Implicit barrier (A) pairs with (B) in		\
 		 * rcuwait_wake_up().					\
 		 */							\
-		set_current_state(TASK_UNINTERRUPTIBLE);		\
+		set_current_state(state);				\
 		if (condition)						\
 			break;						\
 									\
+		if (signal_pending_state(state, current)) {		\
+			__ret = -EINTR;					\
+			break;						\
+		}							\
+									\
 		schedule();						\
 	}								\
 									\
 	WRITE_ONCE((w)->task, NULL);					\
 	__set_current_state(TASK_RUNNING);				\
+	__ret;								\
 })

 #endif /* _LINUX_RCUWAIT_H_ */
--- a/kernel/locking/percpu-rwsem.c
+++ b/kernel/locking/percpu-rwsem.c
@@ -162,7 +162,7 @@ void percpu_down_write(struct percpu_rw_
 	 */

 	/* Wait for all now active readers to complete. */
-	rcuwait_wait_event(&sem->writer, readers_active_check(sem));
+	rcuwait_wait_event(&sem->writer, readers_active_check(sem), TASK_UNINTERRUPTIBLE);
 }
 EXPORT_SYMBOL_GPL(percpu_down_write);
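With the added @state argument the caller picks the sleep state and, for
interruptible sleeps, has to handle the -EINTR return introduced above. A
usage sketch (the wait condition is illustrative):

    struct rcuwait w;
    int ret;

    rcuwait_init(&w);

    /* uninterruptible, as percpu_down_write() uses it */
    rcuwait_wait_event(&w, READ_ONCE(done), TASK_UNINTERRUPTIBLE);

    /* interruptible: evaluates to -EINTR when a signal is pending */
    ret = rcuwait_wait_event(&w, READ_ONCE(done), TASK_INTERRUPTIBLE);
    if (ret)
            return ret;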
From patchwork Sat Mar 21 11:25:56 2020
From: Thomas Gleixner
To: LKML
Date: Sat, 21 Mar 2020 12:25:56 +0100
Message-Id: <20200321113241.930037873@linutronix.de>
References: <20200321112544.878032781@linutronix.de>
Subject: [patch V3 12/20] powerpc/ps3: Convert half completion to rcuwait

From: Thomas Gleixner

The PS3 notification interrupt and kthread use a hacked up completion to
communicate. Since we want to change the completion implementation and
this usage is abuse anyway, replace it with a simple rcuwait, since there
is only ever the one waiter.

AFAICT the kthread uses TASK_INTERRUPTIBLE to not increase loadavg;
kthreads cannot receive signals by default and this one doesn't look
different. Use TASK_IDLE instead.
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Thomas Gleixner
Cc: Michael Ellerman
Cc: Arnd Bergmann
Cc: Geoff Levand
Cc: linuxppc-dev@lists.ozlabs.org
---
V3: Folded the init fix from bigeasy
V2: New patch to avoid the magic completion wait variant
---
 arch/powerpc/platforms/ps3/device-init.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

--- a/arch/powerpc/platforms/ps3/device-init.c
+++ b/arch/powerpc/platforms/ps3/device-init.c
@@ -13,6 +13,7 @@
 #include
 #include
 #include
+#include

 #include
 #include
@@ -670,7 +671,8 @@ struct ps3_notification_device {
 	spinlock_t lock;
 	u64 tag;
 	u64 lv1_status;
-	struct completion done;
+	struct rcuwait wait;
+	bool done;
 };

 enum ps3_notify_type {
@@ -712,7 +714,8 @@ static irqreturn_t ps3_notification_inte
 		pr_debug("%s:%u: completed, status 0x%llx\n", __func__,
 			 __LINE__, status);
 		dev->lv1_status = status;
-		complete(&dev->done);
+		dev->done = true;
+		rcuwait_wake_up(&dev->wait);
 	}
 	spin_unlock(&dev->lock);
 	return IRQ_HANDLED;
@@ -725,12 +728,12 @@ static int ps3_notification_read_write(s
 	unsigned long flags;
 	int res;

-	init_completion(&dev->done);
 	spin_lock_irqsave(&dev->lock, flags);
 	res = write ? lv1_storage_write(dev->sbd.dev_id, 0, 0, 1, 0, lpar,
 					&dev->tag)
 		    : lv1_storage_read(dev->sbd.dev_id, 0, 0, 1, 0, lpar,
 				       &dev->tag);
+	dev->done = false;
 	spin_unlock_irqrestore(&dev->lock, flags);
 	if (res) {
 		pr_err("%s:%u: %s failed %d\n", __func__, __LINE__, op, res);
@@ -738,14 +741,10 @@ static int ps3_notification_read_write(s
 	}
 	pr_debug("%s:%u: notification %s issued\n", __func__, __LINE__, op);

-	res = wait_event_interruptible(dev->done.wait,
-				       dev->done.done || kthread_should_stop());
+	rcuwait_wait_event(&dev->wait, dev->done || kthread_should_stop(), TASK_IDLE);
+
 	if (kthread_should_stop())
 		res = -EINTR;
-	if (res) {
-		pr_debug("%s:%u: interrupted %s\n", __func__, __LINE__, op);
-		return res;
-	}

 	if (dev->lv1_status) {
 		pr_err("%s:%u: %s not completed, status 0x%llx\n", __func__,
@@ -810,6 +809,7 @@ static int ps3_probe_thread(void *data)
 	}

 	spin_lock_init(&dev.lock);
+	rcuwait_init(&dev.wait);

 	res = request_irq(irq, ps3_notification_interrupt, 0,
 			  "ps3_notification", &dev);
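For context on the TASK_IDLE choice (a scheduler API fact, not part of the
patch): TASK_IDLE is an uninterruptible sleep that is excluded from the
load average, which is exactly what a signal-ignoring kthread wants:

    /* include/linux/sched.h */
    #define TASK_IDLE	(TASK_UNINTERRUPTIBLE | TASK_NOLOAD)

It sleeps like TASK_UNINTERRUPTIBLE (signal_pending_state() returns false,
so the rcuwait above never exits with -EINTR) without inflating loadavg the
way a plain uninterruptible sleep would.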
<20200321113242.026561244@linutronix.de> User-Agent: quilt/0.65 Date: Sat, 21 Mar 2020 12:25:57 +0100 From: Thomas Gleixner To: LKML Cc: Peter Zijlstra , Ingo Molnar , Sebastian Siewior , Linus Torvalds , Joel Fernandes , Oleg Nesterov , Davidlohr Bueso , "Paul E . McKenney" , Jonathan Corbet , Randy Dunlap , Logan Gunthorpe , Bjorn Helgaas , Kurt Schwemmer , linux-pci@vger.kernel.org, Greg Kroah-Hartman , Felipe Balbi , linux-usb@vger.kernel.org, Kalle Valo , "David S. Miller" , linux-wireless@vger.kernel.org, netdev@vger.kernel.org, Darren Hart , Andy Shevchenko , platform-driver-x86@vger.kernel.org, Zhang Rui , "Rafael J. Wysocki" , linux-pm@vger.kernel.org, Len Brown , linux-acpi@vger.kernel.org, kbuild test robot , Nick Hu , Greentime Hu , Vincent Chen , Guo Ren , linux-csky@vger.kernel.org, Brian Cain , linux-hexagon@vger.kernel.org, Tony Luck , Fenghua Yu , linux-ia64@vger.kernel.org, Michal Simek , Michael Ellerman , Arnd Bergmann , Geoff Levand , linuxppc-dev@lists.ozlabs.org, Davidlohr Bueso Subject: [patch V3 13/20] Documentation: Add lock ordering and nesting documentation References: <20200321112544.878032781@linutronix.de> MIME-Version: 1.0 Content-transfer-encoding: 8-bit X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org From: Thomas Gleixner The kernel provides a variety of locking primitives. The nesting of these lock types and the implications of them on RT enabled kernels is nowhere documented. Add initial documentation. Signed-off-by: Thomas Gleixner Cc: "Paul E . McKenney" Cc: Jonathan Corbet Cc: Davidlohr Bueso Cc: Randy Dunlap --- V3: Addressed review comments from Paul, Jonathan, Davidlohr V2: Addressed review comments from Randy --- Documentation/locking/index.rst | 1 Documentation/locking/locktypes.rst | 299 ++++++++++++++++++++++++++++++++++++ 2 files changed, 300 insertions(+) create mode 100644 Documentation/locking/locktypes.rst --- a/Documentation/locking/index.rst +++ b/Documentation/locking/index.rst @@ -7,6 +7,7 @@ locking .. toctree:: :maxdepth: 1 + locktypes lockdep-design lockstat locktorture --- /dev/null +++ b/Documentation/locking/locktypes.rst @@ -0,0 +1,299 @@ +.. SPDX-License-Identifier: GPL-2.0 + +.. _kernel_hacking_locktypes: + +========================== +Lock types and their rules +========================== + +Introduction +============ + +The kernel provides a variety of locking primitives which can be divided +into two categories: + + - Sleeping locks + - Spinning locks + +This document conceptually describes these lock types and provides rules +for their nesting, including the rules for use under PREEMPT_RT. + + +Lock categories +=============== + +Sleeping locks +-------------- + +Sleeping locks can only be acquired in preemptible task context. + +Although implementations allow try_lock() from other contexts, it is +necessary to carefully evaluate the safety of unlock() as well as of +try_lock(). Furthermore, it is also necessary to evaluate the debugging +versions of these primitives. In short, don't acquire sleeping locks from +other contexts unless there is no other option. 
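To make the rule concrete, consider this deliberately broken sketch (an editorial illustration, not code from the kernel tree; my_mutex, my_irq_handler() and do_something() are made-up names)::

   static irqreturn_t my_irq_handler(int irq, void *data)
   {
           mutex_lock(&my_mutex);  /* BUG: sleeping lock in interrupt context */
           do_something();
           mutex_unlock(&my_mutex);
           return IRQ_HANDLED;
   }

The very same mutex_lock() / mutex_unlock() pair issued from preemptible task context is perfectly fine.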
+ +Sleeping lock types: + + - mutex + - rt_mutex + - semaphore + - rw_semaphore + - ww_mutex + - percpu_rw_semaphore + +On PREEMPT_RT kernels, these lock types are converted to sleeping locks: + + - spinlock_t + - rwlock_t + +Spinning locks +-------------- + + - raw_spinlock_t + - bit spinlocks + +On non-PREEMPT_RT kernels, these lock types are also spinning locks: + + - spinlock_t + - rwlock_t + +Spinning locks implicitly disable preemption and the lock / unlock functions +can have suffixes which apply further protections: + + =================== ==================================================== + _bh() Disable / enable bottom halves (soft interrupts) + _irq() Disable / enable interrupts + _irqsave/restore() Save and disable / restore interrupt disabled state + =================== ==================================================== + + +rtmutex +======= + +RT-mutexes are mutexes with support for priority inheritance (PI). + +PI has limitations on non-PREEMPT_RT enabled kernels due to preemption and +interrupt disabled sections. + +PI clearly cannot preempt preemption-disabled or interrupt-disabled +regions of code, even on PREEMPT_RT kernels. Instead, PREEMPT_RT kernels +execute most such regions of code in preemptible task context, especially +interrupt handlers and soft interrupts. This conversion allows spinlock_t +and rwlock_t to be implemented via RT-mutexes. + + +raw_spinlock_t and spinlock_t +============================= + +raw_spinlock_t +-------------- + +raw_spinlock_t is a strict spinning lock implementation in all kernels, +including PREEMPT_RT kernels. Use raw_spinlock_t only in real critical +core code, low level interrupt handling and places where disabling +preemption or interrupts is required, for example, to safely access +hardware state. raw_spinlock_t can sometimes also be used when the critical section is tiny, thus avoiding RT-mutex overhead. + +spinlock_t +---------- + +The semantics of spinlock_t change with the state of CONFIG_PREEMPT_RT. + +On a non-PREEMPT_RT enabled kernel spinlock_t is mapped to raw_spinlock_t +and has exactly the same semantics. + +spinlock_t and PREEMPT_RT +------------------------- + +On a PREEMPT_RT enabled kernel spinlock_t is mapped to a separate +implementation based on rt_mutex which changes the semantics: + + - Preemption is not disabled + + - The hard interrupt related suffixes for spin_lock / spin_unlock + operations (_irq, _irqsave / _irqrestore) do not affect the CPU's + interrupt disabled state + + - The soft interrupt related suffix (_bh()) still disables softirq + handlers. + + Non-PREEMPT_RT kernels disable preemption to get this effect. + + PREEMPT_RT kernels use a per-CPU lock for serialization which keeps + preemption disabled. The lock disables softirq handlers and also + prevents reentrancy due to task preemption. + +PREEMPT_RT kernels preserve all other spinlock_t semantics: + + - Tasks holding a spinlock_t do not migrate. Non-PREEMPT_RT kernels + avoid migration by disabling preemption. PREEMPT_RT kernels instead + disable migration, which ensures that pointers to per-CPU variables + remain valid even if the task is preempted. + + - Task state is preserved across spinlock acquisition, ensuring that the + task-state rules apply to all kernel configurations. Non-PREEMPT_RT + kernels leave task state untouched.
However, PREEMPT_RT must change + task state if the task blocks during acquisition. Therefore, it saves + the current task state before blocking and the corresponding lock wakeup + restores it. + + Other types of wakeups would normally unconditionally set the task state + to RUNNING, but that does not work here because the task must remain + blocked until the lock becomes available. Therefore, when a non-lock + wakeup attempts to awaken a task blocked waiting for a spinlock, it + instead sets the saved state to RUNNING. Then, when the lock + acquisition completes, the lock wakeup sets the task state to the saved + state, in this case setting it to RUNNING. + +rwlock_t +======== + +rwlock_t is a multiple readers and single writer lock mechanism. + +Non-PREEMPT_RT kernels implement rwlock_t as a spinning lock and the +suffix rules of spinlock_t apply accordingly. The implementation is fair, +thus preventing writer starvation. + +rwlock_t and PREEMPT_RT +----------------------- + +PREEMPT_RT kernels map rwlock_t to a separate rt_mutex-based +implementation, thus changing semantics: + + - All the spinlock_t changes also apply to rwlock_t. + + - Because an rwlock_t writer cannot grant its priority to multiple + readers, a preempted low-priority reader will continue holding its lock, + thus starving even high-priority writers. In contrast, because readers + can grant their priority to a writer, a preempted low-priority writer + will have its priority boosted until it releases the lock, thus + preventing that writer from starving readers. + + +PREEMPT_RT caveats +================== + +spinlock_t and rwlock_t +----------------------- + +These changes in spinlock_t and rwlock_t semantics on PREEMPT_RT kernels +have a few implications. For example, on a non-PREEMPT_RT kernel the +following code sequence works as expected:: + + local_irq_disable(); + spin_lock(&lock); + +and is fully equivalent to:: + + spin_lock_irq(&lock); + +The same applies to rwlock_t and the _irqsave() suffix variants. + +On a PREEMPT_RT kernel this code sequence breaks because RT-mutex requires a +fully preemptible context. Instead, use spin_lock_irq() or +spin_lock_irqsave() and their unlock counterparts. In cases where the +interrupt disabling and locking must remain separate, PREEMPT_RT offers a +local_lock mechanism. Acquiring the local_lock pins the task to a CPU, +allowing things like per-CPU irq-disabled locks to be acquired. However, +this approach should be used only where absolutely necessary. + + +raw_spinlock_t +-------------- + +Acquiring a raw_spinlock_t disables preemption and possibly also +interrupts, so the critical section must avoid acquiring a regular +spinlock_t or rwlock_t; for example, the critical section must avoid +allocating memory. Thus, on a non-PREEMPT_RT kernel the following code +works perfectly:: + + raw_spin_lock(&lock); + p = kmalloc(sizeof(*p), GFP_ATOMIC); + +But this code fails on PREEMPT_RT kernels because the memory allocator is +fully preemptible and therefore cannot be invoked from truly atomic +contexts. However, it is perfectly fine to invoke the memory allocator +while holding normal non-raw spinlocks because they do not disable +preemption on PREEMPT_RT kernels:: + + spin_lock(&lock); + p = kmalloc(sizeof(*p), GFP_ATOMIC); + + +bit spinlocks +------------- + +Bit spinlocks are problematic for PREEMPT_RT as they cannot be easily +substituted by an RT-mutex based implementation for obvious reasons. + +The semantics of bit spinlocks are preserved on PREEMPT_RT kernels and the +caveats vs.
raw_spinlock_t apply. + +Some bit spinlocks are substituted by regular spinlock_t for PREEMPT_RT, but +this requires conditional (#ifdef'ed) code changes at the usage site. In +contrast, the spinlock_t substitution is done entirely by the compiler: the +conditionals are restricted to header files and the core implementation of +the locking primitives, so the usage sites do not require any changes. + + +Lock type nesting rules +======================= + +The most basic rules are: + + - Lock types of the same lock category (sleeping, spinning) can nest + arbitrarily as long as they respect the general lock ordering rules to + prevent deadlocks. + + - Sleeping lock types cannot nest inside spinning lock types. + + - Spinning lock types can nest inside sleeping lock types. + +These rules apply in general, independent of CONFIG_PREEMPT_RT. + +As PREEMPT_RT changes the lock category of spinlock_t and rwlock_t from +spinning to sleeping, this obviously restricts how they can nest with +raw_spinlock_t. + +This results in the following nest ordering (illustrated by the sketch at +the end of this document): + + 1) Sleeping locks + 2) spinlock_t and rwlock_t + 3) raw_spinlock_t and bit spinlocks + +Lockdep is aware of these constraints to ensure that they are respected. + + +Owner semantics +=============== + +Most lock types in the Linux kernel have strict owner semantics, i.e. the +context (task) which acquires a lock has to release it. + +There are two exceptions: + + - semaphores + - rwsems + +semaphores have no owner semantics for historical reasons, and as such +trylock and release operations can be called from any context. They are +often used for both serialization and waiting purposes. That's generally +discouraged and should be replaced by separate serialization and wait +mechanisms, such as mutexes and completions. + +rwsems have grown interfaces which allow non-owner release for special +purposes. This usage is problematic on PREEMPT_RT because PREEMPT_RT +substitutes all locking primitives except semaphores with RT-mutex based +implementations to provide priority inheritance for all lock types except +the truly spinning ones. Priority inheritance on ownerless locks is +obviously impossible. + +For now the rwsem non-owner release excludes code which utilizes it from +being used on PREEMPT_RT enabled kernels. In some cases this can be +mitigated by disabling portions of the code; in other cases the complete +functionality has to be disabled until a workable solution has been found.
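To illustrate the nest ordering above with a minimal sketch (an editorial example, not part of the patch), a legal nesting chain on any kernel configuration looks like this::

   mutex_lock(&m);          /* 1) sleeping lock, outermost */
   spin_lock(&s);           /* 2) spinlock_t nests inside sleeping locks */
   raw_spin_lock(&r);       /* 3) raw_spinlock_t innermost */
   /* critical section */
   raw_spin_unlock(&r);
   spin_unlock(&s);
   mutex_unlock(&m);

Inverting any pair, for example acquiring the mutex inside the raw_spin_lock() section, violates these rules.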
From patchwork Sat Mar 21 11:25:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 11451009 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id D88B3913 for ; Sat, 21 Mar 2020 11:36:54 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id AE6752077D for ; Sat, 21 Mar 2020 11:36:54 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728584AbgCULgx (ORCPT ); Sat, 21 Mar 2020 07:36:53 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:38478 "EHLO Galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727212AbgCULe4 (ORCPT ); Sat, 21 Mar 2020 07:34:56 -0400 Received: from p5de0bf0b.dip0.t-ipconnect.de ([93.224.191.11] helo=nanos.tec.linutronix.de) by Galois.linutronix.de with esmtpsa (TLS1.2:DHE_RSA_AES_256_CBC_SHA256:256) (Exim 4.80) (envelope-from ) id 1jFcOW-0002Cb-2m; Sat, 21 Mar 2020 12:34:24 +0100 Received: from nanos.tec.linutronix.de (localhost [IPv6:::1]) by nanos.tec.linutronix.de (Postfix) with ESMTP id D1AE31040C5; Sat, 21 Mar 2020 12:34:20 +0100 (CET) Message-Id: <20200321113242.120587764@linutronix.de> User-Agent: quilt/0.65 Date: Sat, 21 Mar 2020 12:25:58 +0100 From: Thomas Gleixner To: LKML Cc: Peter Zijlstra , Ingo Molnar , Sebastian Siewior , Linus Torvalds , Joel Fernandes , Oleg Nesterov , Davidlohr Bueso , Logan Gunthorpe , Bjorn Helgaas , Kurt Schwemmer , linux-pci@vger.kernel.org, Greg Kroah-Hartman , Felipe Balbi , linux-usb@vger.kernel.org, Kalle Valo , "David S. Miller" , linux-wireless@vger.kernel.org, netdev@vger.kernel.org, Darren Hart , Andy Shevchenko , platform-driver-x86@vger.kernel.org, Zhang Rui , "Rafael J. Wysocki" , linux-pm@vger.kernel.org, Len Brown , linux-acpi@vger.kernel.org, kbuild test robot , Nick Hu , Greentime Hu , Vincent Chen , Guo Ren , linux-csky@vger.kernel.org, Brian Cain , linux-hexagon@vger.kernel.org, Tony Luck , Fenghua Yu , linux-ia64@vger.kernel.org, Michal Simek , Michael Ellerman , Arnd Bergmann , Geoff Levand , linuxppc-dev@lists.ozlabs.org, "Paul E . McKenney" , Jonathan Corbet , Randy Dunlap , Davidlohr Bueso Subject: [patch V3 14/20] timekeeping: Split jiffies seqlock References: <20200321112544.878032781@linutronix.de> MIME-Version: 1.0 Content-transfer-encoding: 8-bit X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org From: Thomas Gleixner seqlock consists of a sequence counter and a spinlock_t which is used to serialize the writers. spinlock_t is substituted by a "sleeping" spinlock on PREEMPT_RT enabled kernels which breaks the usage in the timekeeping code as the writers are executed in hard interrupt and therefore non-preemptible context even on PREEMPT_RT. The spinlock in seqlock cannot be unconditionally replaced by a raw_spinlock_t as many seqlock users have nesting spinlock sections or other code which is not suitable to run in truly atomic context on RT. Instead of providing a raw_seqlock API for a single use case, open code the seqlock for the jiffies use case and implement it with a raw_spinlock_t and a sequence counter. 
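Condensed from the hunks below, the open-coded replacement pattern is (a sketch; the variable declarations are added here for readability):

  unsigned int seq;
  u64 ret;

  /* Writer side: the raw_spinlock_t serializes writers even on PREEMPT_RT,
   * the seqcount provides the read side consistency check. */
  raw_spin_lock(&jiffies_lock);
  write_seqcount_begin(&jiffies_seq);
  do_timer(1);
  write_seqcount_end(&jiffies_seq);
  raw_spin_unlock(&jiffies_lock);

  /* Reader side: lockless retry loop, unchanged in spirit from seqlock. */
  do {
          seq = read_seqcount_begin(&jiffies_seq);
          ret = jiffies_64;
  } while (read_seqcount_retry(&jiffies_seq, seq));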
Signed-off-by: Thomas Gleixner --- kernel/time/jiffies.c | 7 ++++--- kernel/time/tick-common.c | 10 ++++++---- kernel/time/tick-sched.c | 19 ++++++++++++------- kernel/time/timekeeping.c | 6 ++++-- kernel/time/timekeeping.h | 3 ++- 5 files changed, 28 insertions(+), 17 deletions(-) --- a/kernel/time/jiffies.c +++ b/kernel/time/jiffies.c @@ -58,7 +58,8 @@ static struct clocksource clocksource_ji .max_cycles = 10, }; -__cacheline_aligned_in_smp DEFINE_SEQLOCK(jiffies_lock); +__cacheline_aligned_in_smp DEFINE_RAW_SPINLOCK(jiffies_lock); +__cacheline_aligned_in_smp seqcount_t jiffies_seq; #if (BITS_PER_LONG < 64) u64 get_jiffies_64(void) @@ -67,9 +68,9 @@ u64 get_jiffies_64(void) u64 ret; do { - seq = read_seqbegin(&jiffies_lock); + seq = read_seqcount_begin(&jiffies_seq); ret = jiffies_64; - } while (read_seqretry(&jiffies_lock, seq)); + } while (read_seqcount_retry(&jiffies_seq, seq)); return ret; } EXPORT_SYMBOL(get_jiffies_64); --- a/kernel/time/tick-common.c +++ b/kernel/time/tick-common.c @@ -84,13 +84,15 @@ int tick_is_oneshot_available(void) static void tick_periodic(int cpu) { if (tick_do_timer_cpu == cpu) { - write_seqlock(&jiffies_lock); + raw_spin_lock(&jiffies_lock); + write_seqcount_begin(&jiffies_seq); /* Keep track of the next tick event */ tick_next_period = ktime_add(tick_next_period, tick_period); do_timer(1); - write_sequnlock(&jiffies_lock); + write_seqcount_end(&jiffies_seq); + raw_spin_unlock(&jiffies_lock); update_wall_time(); } @@ -162,9 +164,9 @@ void tick_setup_periodic(struct clock_ev ktime_t next; do { - seq = read_seqbegin(&jiffies_lock); + seq = read_seqcount_begin(&jiffies_seq); next = tick_next_period; - } while (read_seqretry(&jiffies_lock, seq)); + } while (read_seqcount_retry(&jiffies_seq, seq)); clockevents_switch_state(dev, CLOCK_EVT_STATE_ONESHOT); --- a/kernel/time/tick-sched.c +++ b/kernel/time/tick-sched.c @@ -65,7 +65,8 @@ static void tick_do_update_jiffies64(kti return; /* Reevaluate with jiffies_lock held */ - write_seqlock(&jiffies_lock); + raw_spin_lock(&jiffies_lock); + write_seqcount_begin(&jiffies_seq); delta = ktime_sub(now, last_jiffies_update); if (delta >= tick_period) { @@ -91,10 +92,12 @@ static void tick_do_update_jiffies64(kti /* Keep the tick_next_period variable up to date */ tick_next_period = ktime_add(last_jiffies_update, tick_period); } else { - write_sequnlock(&jiffies_lock); + write_seqcount_end(&jiffies_seq); + raw_spin_unlock(&jiffies_lock); return; } - write_sequnlock(&jiffies_lock); + write_seqcount_end(&jiffies_seq); + raw_spin_unlock(&jiffies_lock); update_wall_time(); } @@ -105,12 +108,14 @@ static ktime_t tick_init_jiffy_update(vo { ktime_t period; - write_seqlock(&jiffies_lock); + raw_spin_lock(&jiffies_lock); + write_seqcount_begin(&jiffies_seq); /* Did we start the jiffies update yet ? 
*/ if (last_jiffies_update == 0) last_jiffies_update = tick_next_period; period = last_jiffies_update; - write_sequnlock(&jiffies_lock); + write_seqcount_end(&jiffies_seq); + raw_spin_unlock(&jiffies_lock); return period; } @@ -676,10 +681,10 @@ static ktime_t tick_nohz_next_event(stru /* Read jiffies and the time when jiffies were updated last */ do { - seq = read_seqbegin(&jiffies_lock); + seq = read_seqcount_begin(&jiffies_seq); basemono = last_jiffies_update; basejiff = jiffies; - } while (read_seqretry(&jiffies_lock, seq)); + } while (read_seqcount_retry(&jiffies_seq, seq)); ts->last_jiffies = basejiff; ts->timer_expires_base = basemono; --- a/kernel/time/timekeeping.c +++ b/kernel/time/timekeeping.c @@ -2397,8 +2397,10 @@ EXPORT_SYMBOL(hardpps); */ void xtime_update(unsigned long ticks) { - write_seqlock(&jiffies_lock); + raw_spin_lock(&jiffies_lock); + write_seqcount_begin(&jiffies_seq); do_timer(ticks); - write_sequnlock(&jiffies_lock); + write_seqcount_end(&jiffies_seq); + raw_spin_unlock(&jiffies_lock); update_wall_time(); } --- a/kernel/time/timekeeping.h +++ b/kernel/time/timekeeping.h @@ -25,7 +25,8 @@ static inline void sched_clock_resume(vo extern void do_timer(unsigned long ticks); extern void update_wall_time(void); -extern seqlock_t jiffies_lock; +extern raw_spinlock_t jiffies_lock; +extern seqcount_t jiffies_seq; #define CS_NAME_LEN 32 From patchwork Sat Mar 21 11:25:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 11451065 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 375381668 for ; Sat, 21 Mar 2020 11:37:38 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 1F9A020722 for ; Sat, 21 Mar 2020 11:37:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728668AbgCULh3 (ORCPT ); Sat, 21 Mar 2020 07:37:29 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:38469 "EHLO Galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727052AbgCULez (ORCPT ); Sat, 21 Mar 2020 07:34:55 -0400 Received: from p5de0bf0b.dip0.t-ipconnect.de ([93.224.191.11] helo=nanos.tec.linutronix.de) by Galois.linutronix.de with esmtpsa (TLS1.2:DHE_RSA_AES_256_CBC_SHA256:256) (Exim 4.80) (envelope-from ) id 1jFcOW-0002EM-Vk; Sat, 21 Mar 2020 12:34:25 +0100 Received: from nanos.tec.linutronix.de (localhost [IPv6:::1]) by nanos.tec.linutronix.de (Postfix) with ESMTP id 1C9FE1040C6; Sat, 21 Mar 2020 12:34:21 +0100 (CET) Message-Id: <20200321113242.228481202@linutronix.de> User-Agent: quilt/0.65 Date: Sat, 21 Mar 2020 12:25:59 +0100 From: Thomas Gleixner To: LKML Cc: Peter Zijlstra , Ingo Molnar , Sebastian Siewior , Linus Torvalds , Joel Fernandes , Oleg Nesterov , Davidlohr Bueso , Logan Gunthorpe , Bjorn Helgaas , Kurt Schwemmer , linux-pci@vger.kernel.org, Greg Kroah-Hartman , Felipe Balbi , linux-usb@vger.kernel.org, Kalle Valo , "David S. Miller" , linux-wireless@vger.kernel.org, netdev@vger.kernel.org, Darren Hart , Andy Shevchenko , platform-driver-x86@vger.kernel.org, Zhang Rui , "Rafael J. 
Wysocki" , linux-pm@vger.kernel.org, Len Brown , linux-acpi@vger.kernel.org, kbuild test robot , Nick Hu , Greentime Hu , Vincent Chen , Guo Ren , linux-csky@vger.kernel.org, Brian Cain , linux-hexagon@vger.kernel.org, Tony Luck , Fenghua Yu , linux-ia64@vger.kernel.org, Michal Simek , Michael Ellerman , Arnd Bergmann , Geoff Levand , linuxppc-dev@lists.ozlabs.org, "Paul E . McKenney" , Jonathan Corbet , Randy Dunlap , Davidlohr Bueso Subject: [patch V3 15/20] sched/swait: Prepare usage in completions References: <20200321112544.878032781@linutronix.de> MIME-Version: 1.0 Content-transfer-encoding: 8-bit X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org From: Thomas Gleixner As a preparation to use simple wait queues for completions: - Provide swake_up_all_locked() to support complete_all() - Make __prepare_to_swait() public available This is done to enable the usage of complete() within truly atomic contexts on a PREEMPT_RT enabled kernel. Signed-off-by: Thomas Gleixner Cc: Linus Torvalds --- V2: Add comment to swake_up_all_locked() --- kernel/sched/sched.h | 3 +++ kernel/sched/swait.c | 15 ++++++++++++++- 2 files changed, 17 insertions(+), 1 deletion(-) --- a/kernel/sched/sched.h +++ b/kernel/sched/sched.h @@ -2492,3 +2492,6 @@ static inline bool is_per_cpu_kthread(st return true; } #endif + +void swake_up_all_locked(struct swait_queue_head *q); +void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait); --- a/kernel/sched/swait.c +++ b/kernel/sched/swait.c @@ -32,6 +32,19 @@ void swake_up_locked(struct swait_queue_ } EXPORT_SYMBOL(swake_up_locked); +/* + * Wake up all waiters. This is an interface which is solely exposed for + * completions and not for general usage. + * + * It is intentionally different from swake_up_all() to allow usage from + * hard interrupt context and interrupt disabled regions. 
+ */ +void swake_up_all_locked(struct swait_queue_head *q) +{ + while (!list_empty(&q->task_list)) + swake_up_locked(q); +} + void swake_up_one(struct swait_queue_head *q) { unsigned long flags; @@ -69,7 +82,7 @@ void swake_up_all(struct swait_queue_hea } EXPORT_SYMBOL(swake_up_all); -static void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait) +void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait) { wait->task = current; if (list_empty(&wait->task_list)) From patchwork Sat Mar 21 11:26:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 11450911 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BFE2C913 for ; Sat, 21 Mar 2020 11:35:34 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9DF8A20754 for ; Sat, 21 Mar 2020 11:35:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728686AbgCULf2 (ORCPT ); Sat, 21 Mar 2020 07:35:28 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:38513 "EHLO Galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728617AbgCULfD (ORCPT ); Sat, 21 Mar 2020 07:35:03 -0400 Received: from p5de0bf0b.dip0.t-ipconnect.de ([93.224.191.11] helo=nanos.tec.linutronix.de) by Galois.linutronix.de with esmtpsa (TLS1.2:DHE_RSA_AES_256_CBC_SHA256:256) (Exim 4.80) (envelope-from ) id 1jFcOa-0002G3-Hx; Sat, 21 Mar 2020 12:34:28 +0100 Received: from nanos.tec.linutronix.de (localhost [IPv6:::1]) by nanos.tec.linutronix.de (Postfix) with ESMTP id 5E7261040C7; Sat, 21 Mar 2020 12:34:21 +0100 (CET) Message-Id: <20200321113242.317954042@linutronix.de> User-Agent: quilt/0.65 Date: Sat, 21 Mar 2020 12:26:00 +0100 From: Thomas Gleixner To: LKML Cc: Peter Zijlstra , Ingo Molnar , Sebastian Siewior , Linus Torvalds , Joel Fernandes , Oleg Nesterov , Davidlohr Bueso , Davidlohr Bueso , Greg Kroah-Hartman , Logan Gunthorpe , Bjorn Helgaas , Kurt Schwemmer , linux-pci@vger.kernel.org, Felipe Balbi , linux-usb@vger.kernel.org, Kalle Valo , "David S. Miller" , linux-wireless@vger.kernel.org, netdev@vger.kernel.org, Darren Hart , Andy Shevchenko , platform-driver-x86@vger.kernel.org, Zhang Rui , "Rafael J. Wysocki" , linux-pm@vger.kernel.org, Len Brown , linux-acpi@vger.kernel.org, kbuild test robot , Nick Hu , Greentime Hu , Vincent Chen , Guo Ren , linux-csky@vger.kernel.org, Brian Cain , linux-hexagon@vger.kernel.org, Tony Luck , Fenghua Yu , linux-ia64@vger.kernel.org, Michal Simek , Michael Ellerman , Arnd Bergmann , Geoff Levand , linuxppc-dev@lists.ozlabs.org, "Paul E . McKenney" , Jonathan Corbet , Randy Dunlap Subject: [patch V3 16/20] completion: Use simple wait queues References: <20200321112544.878032781@linutronix.de> MIME-Version: 1.0 Content-transfer-encoding: 8-bit X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org From: Thomas Gleixner completion uses a wait_queue_head_t to enqueue waiters. wait_queue_head_t contains a spinlock_t to protect the list of waiters which excludes it from being used in truly atomic context on a PREEMPT_RT enabled kernel. 
The spinlock in the wait queue head cannot be replaced by a raw_spinlock because: - wait queues can have custom wakeup callbacks, which acquire other spinlock_t locks and have potentially long execution times - wake_up() walks an unbounded number of list entries during the wake up and may wake an unbounded number of waiters. For simplicity and performance reasons complete() should be usable on PREEMPT_RT enabled kernels. completions do not use custom wakeup callbacks and are usually single waiter, except for a few corner cases. Replace the wait queue in the completion with a simple wait queue (swait), which uses a raw_spinlock_t for protecting the waiter list and therefore is safe to use inside truly atomic regions on PREEMPT_RT. There is no semantical or functional change: - completions use the exclusive wait mode which is what swait provides - complete() wakes one exclusive waiter - complete_all() wakes all waiters while holding the lock which protects the wait queue against newly incoming waiters. The conversion to swait preserves this behaviour. complete_all() might cause unbound latencies with a large number of waiters being woken at once, but most complete_all() usage sites are either in testing or initialization code or have only a really small number of concurrent waiters which for now does not cause a latency problem. Keep it simple for now. The fixup of the warning check in the USB gadget driver is just a straight forward conversion of the lockless waiter check from one waitqueue type to the other. Signed-off-by: Thomas Gleixner Reviewed-by: Joel Fernandes (Google) Reviewed-by: Davidlohr Bueso Reviewed-by: Greg Kroah-Hartman Acked-by: Linus Torvalds --- V2: Split out the orinoco and usb gadget parts and amended change log --- drivers/usb/gadget/function/f_fs.c | 2 +- include/linux/completion.h | 8 ++++---- kernel/sched/completion.c | 36 +++++++++++++++++++----------------- 3 files changed, 24 insertions(+), 22 deletions(-) --- a/drivers/usb/gadget/function/f_fs.c +++ b/drivers/usb/gadget/function/f_fs.c @@ -1703,7 +1703,7 @@ static void ffs_data_put(struct ffs_data pr_info("%s(): freeing\n", __func__); ffs_data_clear(ffs); BUG_ON(waitqueue_active(&ffs->ev.waitq) || - waitqueue_active(&ffs->ep0req_completion.wait) || + swait_active(&ffs->ep0req_completion.wait) || waitqueue_active(&ffs->wait)); destroy_workqueue(ffs->io_completion_wq); kfree(ffs->dev_name); --- a/include/linux/completion.h +++ b/include/linux/completion.h @@ -9,7 +9,7 @@ * See kernel/sched/completion.c for details. 
*/ -#include +#include /* * struct completion - structure used to maintain state for a "completion" @@ -25,7 +25,7 @@ */ struct completion { unsigned int done; - wait_queue_head_t wait; + struct swait_queue_head wait; }; #define init_completion_map(x, m) __init_completion(x) @@ -34,7 +34,7 @@ static inline void complete_acquire(stru static inline void complete_release(struct completion *x) {} #define COMPLETION_INITIALIZER(work) \ - { 0, __WAIT_QUEUE_HEAD_INITIALIZER((work).wait) } + { 0, __SWAIT_QUEUE_HEAD_INITIALIZER((work).wait) } #define COMPLETION_INITIALIZER_ONSTACK_MAP(work, map) \ (*({ init_completion_map(&(work), &(map)); &(work); })) @@ -85,7 +85,7 @@ static inline void complete_release(stru static inline void __init_completion(struct completion *x) { x->done = 0; - init_waitqueue_head(&x->wait); + init_swait_queue_head(&x->wait); } /** --- a/kernel/sched/completion.c +++ b/kernel/sched/completion.c @@ -29,12 +29,12 @@ void complete(struct completion *x) { unsigned long flags; - spin_lock_irqsave(&x->wait.lock, flags); + raw_spin_lock_irqsave(&x->wait.lock, flags); if (x->done != UINT_MAX) x->done++; - __wake_up_locked(&x->wait, TASK_NORMAL, 1); - spin_unlock_irqrestore(&x->wait.lock, flags); + swake_up_locked(&x->wait); + raw_spin_unlock_irqrestore(&x->wait.lock, flags); } EXPORT_SYMBOL(complete); @@ -58,10 +58,12 @@ void complete_all(struct completion *x) { unsigned long flags; - spin_lock_irqsave(&x->wait.lock, flags); + WARN_ON(irqs_disabled()); + + raw_spin_lock_irqsave(&x->wait.lock, flags); x->done = UINT_MAX; - __wake_up_locked(&x->wait, TASK_NORMAL, 0); - spin_unlock_irqrestore(&x->wait.lock, flags); + swake_up_all_locked(&x->wait); + raw_spin_unlock_irqrestore(&x->wait.lock, flags); } EXPORT_SYMBOL(complete_all); @@ -70,20 +72,20 @@ do_wait_for_common(struct completion *x, long (*action)(long), long timeout, int state) { if (!x->done) { - DECLARE_WAITQUEUE(wait, current); + DECLARE_SWAITQUEUE(wait); - __add_wait_queue_entry_tail_exclusive(&x->wait, &wait); do { if (signal_pending_state(state, current)) { timeout = -ERESTARTSYS; break; } + __prepare_to_swait(&x->wait, &wait); __set_current_state(state); - spin_unlock_irq(&x->wait.lock); + raw_spin_unlock_irq(&x->wait.lock); timeout = action(timeout); - spin_lock_irq(&x->wait.lock); + raw_spin_lock_irq(&x->wait.lock); } while (!x->done && timeout); - __remove_wait_queue(&x->wait, &wait); + __finish_swait(&x->wait, &wait); if (!x->done) return timeout; } @@ -100,9 +102,9 @@ static inline long __sched complete_acquire(x); - spin_lock_irq(&x->wait.lock); + raw_spin_lock_irq(&x->wait.lock); timeout = do_wait_for_common(x, action, timeout, state); - spin_unlock_irq(&x->wait.lock); + raw_spin_unlock_irq(&x->wait.lock); complete_release(x); @@ -291,12 +293,12 @@ bool try_wait_for_completion(struct comp if (!READ_ONCE(x->done)) return false; - spin_lock_irqsave(&x->wait.lock, flags); + raw_spin_lock_irqsave(&x->wait.lock, flags); if (!x->done) ret = false; else if (x->done != UINT_MAX) x->done--; - spin_unlock_irqrestore(&x->wait.lock, flags); + raw_spin_unlock_irqrestore(&x->wait.lock, flags); return ret; } EXPORT_SYMBOL(try_wait_for_completion); @@ -322,8 +324,8 @@ bool completion_done(struct completion * * otherwise we can end up freeing the completion before complete() * is done referencing it. 
*/ - spin_lock_irqsave(&x->wait.lock, flags); - spin_unlock_irqrestore(&x->wait.lock, flags); + raw_spin_lock_irqsave(&x->wait.lock, flags); + raw_spin_unlock_irqrestore(&x->wait.lock, flags); return true; } EXPORT_SYMBOL(completion_done); From patchwork Sat Mar 21 11:26:01 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Thomas Gleixner X-Patchwork-Id: 11450887 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id BD2E518C6 for ; Sat, 21 Mar 2020 11:35:22 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.kernel.org (Postfix) with ESMTP id 9BDC920786 for ; Sat, 21 Mar 2020 11:35:22 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728752AbgCULfQ (ORCPT ); Sat, 21 Mar 2020 07:35:16 -0400 Received: from Galois.linutronix.de ([193.142.43.55]:38516 "EHLO Galois.linutronix.de" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726961AbgCULfG (ORCPT ); Sat, 21 Mar 2020 07:35:06 -0400 Received: from p5de0bf0b.dip0.t-ipconnect.de ([93.224.191.11] helo=nanos.tec.linutronix.de) by Galois.linutronix.de with esmtpsa (TLS1.2:DHE_RSA_AES_256_CBC_SHA256:256) (Exim 4.80) (envelope-from ) id 1jFcOX-0002Hj-QZ; Sat, 21 Mar 2020 12:34:26 +0100 Received: from nanos.tec.linutronix.de (localhost [IPv6:::1]) by nanos.tec.linutronix.de (Postfix) with ESMTP id 9EB21FFBBF; Sat, 21 Mar 2020 12:34:21 +0100 (CET) Message-Id: <20200321113242.427089655@linutronix.de> User-Agent: quilt/0.65 Date: Sat, 21 Mar 2020 12:26:01 +0100 From: Thomas Gleixner To: LKML Cc: Peter Zijlstra , Ingo Molnar , Sebastian Siewior , Linus Torvalds , Joel Fernandes , Oleg Nesterov , Davidlohr Bueso , Logan Gunthorpe , Bjorn Helgaas , Kurt Schwemmer , linux-pci@vger.kernel.org, Greg Kroah-Hartman , Felipe Balbi , linux-usb@vger.kernel.org, Kalle Valo , "David S. Miller" , linux-wireless@vger.kernel.org, netdev@vger.kernel.org, Darren Hart , Andy Shevchenko , platform-driver-x86@vger.kernel.org, Zhang Rui , "Rafael J. Wysocki" , linux-pm@vger.kernel.org, Len Brown , linux-acpi@vger.kernel.org, kbuild test robot , Nick Hu , Greentime Hu , Vincent Chen , Guo Ren , linux-csky@vger.kernel.org, Brian Cain , linux-hexagon@vger.kernel.org, Tony Luck , Fenghua Yu , linux-ia64@vger.kernel.org, Michal Simek , Michael Ellerman , Arnd Bergmann , Geoff Levand , linuxppc-dev@lists.ozlabs.org, "Paul E . McKenney" , Jonathan Corbet , Randy Dunlap , Davidlohr Bueso Subject: [patch V3 17/20] lockdep: Introduce wait-type checks References: <20200321112544.878032781@linutronix.de> MIME-Version: 1.0 Content-transfer-encoding: base64 X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org From: Peter Zijlstra Extend lockdep to validate lock wait-type context. The current wait-types are: LD_WAIT_FREE, /* wait free, rcu etc.. */ LD_WAIT_SPIN, /* spin loops, raw_spinlock_t etc.. */ LD_WAIT_CONFIG, /* CONFIG_PREEMPT_LOCK, spinlock_t etc.. */ LD_WAIT_SLEEP, /* sleeping locks, mutex_t etc.. */ Where lockdep validates that the current lock (the one being acquired) fits in the current wait-context (as generated by the held stack). 
This ensures that there is no attempt to acquire mutexes while holding spinlocks, to acquire spinlocks while holding raw_spinlocks and so on. In other words, it's a more fancy might_sleep(). Obviously RCU made the entire ordeal more complex than a simple single value test because RCU can be acquired in (pretty much) any context and while it presents a context to nested locks it is not the same as the one it got acquired in. Therefore it's necessary to split the wait_type into two values, one representing the acquire (outer) and one representing the nested context (inner). For most 'normal' locks these two are the same. [ To make static initialization easier we have the rule that: .outer == INV means .outer == .inner; because INV == 0. ] It further means that it's required to find the minimal .inner of the held stack to compare against the outer of the new lock; because while 'normal' RCU presents a CONFIG type to nested locks, if it is taken while already holding a SPIN type it obviously doesn't relax the rules. Below is an example output generated by the trivial test code: raw_spin_lock(&foo); spin_lock(&bar); spin_unlock(&bar); raw_spin_unlock(&foo); [ BUG: Invalid wait context ] ----------------------------- swapper/0/1 is trying to lock: ffffc90000013f20 (&bar){....}-{3:3}, at: kernel_init+0xdb/0x187 other info that might help us debug this: 1 lock held by swapper/0/1: #0: ffffc90000013ee0 (&foo){+.+.}-{2:2}, at: kernel_init+0xd1/0x187 The way to read it is to look at the new -{n:m} part in the lock description; -{3:3} for the attempted lock, and try to match that up to the held locks, which in this case is the one: -{2:2}. This tells us that the acquiring lock requires a more relaxed environment than presented by the lock stack. Currently only the normal locks and RCU are converted; the rest of the lockdep users default to .inner = INV, which is ignored. More conversions can be done when desired. The check for spinlock_t nesting is not enabled by default. It's a separate config option for now as there are known problems which are currently being addressed. The config option makes it possible to identify these problems and to verify that the solutions found are indeed solving them. The config switch will be removed and the checks will be permanently enabled once the vast majority of issues have been addressed.
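Schematically, the core of the check reduces to the following sketch (simplified from check_wait_context() in the diff below; wait_context_ok() and held_inner[] are illustrative, the real code walks the held_locks stack and special cases hard and soft interrupt contexts):

  /*
   * A lock may be acquired only if its outer wait type does not exceed
   * the minimal inner wait type presented by the already held locks:
   * LD_WAIT_SPIN < LD_WAIT_CONFIG < LD_WAIT_SLEEP.
   */
  static bool wait_context_ok(short next_outer, const short *held_inner,
                              int depth)
  {
          short curr_inner = LD_WAIT_MAX; /* nothing held: anything goes */
          int i;

          for (i = 0; i < depth; i++) {
                  if (held_inner[i])      /* LD_WAIT_INV entries are ignored */
                          curr_inner = min(curr_inner, held_inner[i]);
          }

          return next_outer <= curr_inner;
  }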
[ bigeasy: Move LD_WAIT_FREE,… out of CONFIG_LOCKDEP to avoid compile failure with CONFIG_DEBUG_SPINLOCK + !CONFIG_LOCKDEP] [ tglx: Add the config option ] Requested-by: Thomas Gleixner Signed-off-by: Peter Zijlstra (Intel) Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Thomas Gleixner --- V2: Fix the LOCKDEP=y && LOCK_PROVE=n case --- include/linux/irqflags.h | 8 ++ include/linux/lockdep.h | 71 +++++++++++++++++--- include/linux/mutex.h | 7 +- include/linux/rwlock_types.h | 6 + include/linux/rwsem.h | 6 + include/linux/sched.h | 1 include/linux/spinlock.h | 35 +++++++--- include/linux/spinlock_types.h | 24 +++++- kernel/irq/handle.c | 7 ++ kernel/locking/lockdep.c | 138 ++++++++++++++++++++++++++++++++++++++-- kernel/locking/mutex-debug.c | 2 kernel/locking/rwsem.c | 2 kernel/locking/spinlock_debug.c | 6 - kernel/rcu/update.c | 24 +++++- lib/Kconfig.debug | 17 ++++ 15 files changed, 307 insertions(+), 47 deletions(-) --- a/include/linux/irqflags.h +++ b/include/linux/irqflags.h @@ -37,7 +37,12 @@ # define trace_softirqs_enabled(p) ((p)->softirqs_enabled) # define trace_hardirq_enter() \ do { \ - current->hardirq_context++; \ + if (!current->hardirq_context++) \ + current->hardirq_threaded = 0; \ +} while (0) +# define trace_hardirq_threaded() \ +do { \ + current->hardirq_threaded = 1; \ } while (0) # define trace_hardirq_exit() \ do { \ @@ -59,6 +64,7 @@ do { \ # define trace_hardirqs_enabled(p) 0 # define trace_softirqs_enabled(p) 0 # define trace_hardirq_enter() do { } while (0) +# define trace_hardirq_threaded() do { } while (0) # define trace_hardirq_exit() do { } while (0) # define lockdep_softirq_enter() do { } while (0) # define lockdep_softirq_exit() do { } while (0) --- a/include/linux/lockdep.h +++ b/include/linux/lockdep.h @@ -21,6 +21,22 @@ extern int lock_stat; #include +enum lockdep_wait_type { + LD_WAIT_INV = 0, /* not checked, catch all */ + + LD_WAIT_FREE, /* wait free, rcu etc.. */ + LD_WAIT_SPIN, /* spin loops, raw_spinlock_t etc.. */ + +#ifdef CONFIG_PROVE_RAW_LOCK_NESTING + LD_WAIT_CONFIG, /* CONFIG_PREEMPT_LOCK, spinlock_t etc.. */ +#else + LD_WAIT_CONFIG = LD_WAIT_SPIN, +#endif + LD_WAIT_SLEEP, /* sleeping locks, mutex_t etc.. 
*/ + + LD_WAIT_MAX, /* must be last */ +}; + #ifdef CONFIG_LOCKDEP #include @@ -111,6 +127,9 @@ struct lock_class { int name_version; const char *name; + short wait_type_inner; + short wait_type_outer; + #ifdef CONFIG_LOCK_STAT unsigned long contention_point[LOCKSTAT_POINTS]; unsigned long contending_point[LOCKSTAT_POINTS]; @@ -158,6 +177,8 @@ struct lockdep_map { struct lock_class_key *key; struct lock_class *class_cache[NR_LOCKDEP_CACHING_CLASSES]; const char *name; + short wait_type_outer; /* can be taken in this context */ + short wait_type_inner; /* presents this context */ #ifdef CONFIG_LOCK_STAT int cpu; unsigned long ip; @@ -299,8 +320,21 @@ extern void lockdep_unregister_key(struc * to lockdep: */ -extern void lockdep_init_map(struct lockdep_map *lock, const char *name, - struct lock_class_key *key, int subclass); +extern void lockdep_init_map_waits(struct lockdep_map *lock, const char *name, + struct lock_class_key *key, int subclass, short inner, short outer); + +static inline void +lockdep_init_map_wait(struct lockdep_map *lock, const char *name, + struct lock_class_key *key, int subclass, short inner) +{ + lockdep_init_map_waits(lock, name, key, subclass, inner, LD_WAIT_INV); +} + +static inline void lockdep_init_map(struct lockdep_map *lock, const char *name, + struct lock_class_key *key, int subclass) +{ + lockdep_init_map_wait(lock, name, key, subclass, LD_WAIT_INV); +} /* * Reinitialize a lock key - for cases where there is special locking or @@ -308,18 +342,29 @@ extern void lockdep_init_map(struct lock * of dependencies wrong: they are either too broad (they need a class-split) * or they are too narrow (they suffer from a false class-split): */ -#define lockdep_set_class(lock, key) \ - lockdep_init_map(&(lock)->dep_map, #key, key, 0) -#define lockdep_set_class_and_name(lock, key, name) \ - lockdep_init_map(&(lock)->dep_map, name, key, 0) -#define lockdep_set_class_and_subclass(lock, key, sub) \ - lockdep_init_map(&(lock)->dep_map, #key, key, sub) -#define lockdep_set_subclass(lock, sub) \ - lockdep_init_map(&(lock)->dep_map, #lock, \ - (lock)->dep_map.key, sub) +#define lockdep_set_class(lock, key) \ + lockdep_init_map_waits(&(lock)->dep_map, #key, key, 0, \ + (lock)->dep_map.wait_type_inner, \ + (lock)->dep_map.wait_type_outer) + +#define lockdep_set_class_and_name(lock, key, name) \ + lockdep_init_map_waits(&(lock)->dep_map, name, key, 0, \ + (lock)->dep_map.wait_type_inner, \ + (lock)->dep_map.wait_type_outer) + +#define lockdep_set_class_and_subclass(lock, key, sub) \ + lockdep_init_map_waits(&(lock)->dep_map, #key, key, sub,\ + (lock)->dep_map.wait_type_inner, \ + (lock)->dep_map.wait_type_outer) + +#define lockdep_set_subclass(lock, sub) \ + lockdep_init_map_waits(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\ + (lock)->dep_map.wait_type_inner, \ + (lock)->dep_map.wait_type_outer) #define lockdep_set_novalidate_class(lock) \ lockdep_set_class_and_name(lock, &__lockdep_no_validate__, #lock) + /* * Compare locking classes */ @@ -432,6 +477,10 @@ static inline void lockdep_set_selftest_ # define lock_set_class(l, n, k, s, i) do { } while (0) # define lock_set_subclass(l, s, i) do { } while (0) # define lockdep_init() do { } while (0) +# define lockdep_init_map_waits(lock, name, key, sub, inner, outer) \ + do { (void)(name); (void)(key); } while (0) +# define lockdep_init_map_wait(lock, name, key, sub, inner) \ + do { (void)(name); (void)(key); } while (0) # define lockdep_init_map(lock, name, key, sub) \ do { (void)(name); (void)(key); } while (0) # define 
lockdep_set_class(lock, key) do { (void)(key); } while (0) --- a/include/linux/mutex.h +++ b/include/linux/mutex.h @@ -109,8 +109,11 @@ do { \ } while (0) #ifdef CONFIG_DEBUG_LOCK_ALLOC -# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \ - , .dep_map = { .name = #lockname } +# define __DEP_MAP_MUTEX_INITIALIZER(lockname) \ + , .dep_map = { \ + .name = #lockname, \ + .wait_type_inner = LD_WAIT_SLEEP, \ + } #else # define __DEP_MAP_MUTEX_INITIALIZER(lockname) #endif --- a/include/linux/rwlock_types.h +++ b/include/linux/rwlock_types.h @@ -22,7 +22,11 @@ typedef struct { #define RWLOCK_MAGIC 0xdeaf1eed #ifdef CONFIG_DEBUG_LOCK_ALLOC -# define RW_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname } +# define RW_DEP_MAP_INIT(lockname) \ + .dep_map = { \ + .name = #lockname, \ + .wait_type_inner = LD_WAIT_CONFIG, \ + } #else # define RW_DEP_MAP_INIT(lockname) #endif --- a/include/linux/rwsem.h +++ b/include/linux/rwsem.h @@ -71,7 +71,11 @@ static inline int rwsem_is_locked(struct /* Common initializer macros and functions */ #ifdef CONFIG_DEBUG_LOCK_ALLOC -# define __RWSEM_DEP_MAP_INIT(lockname) , .dep_map = { .name = #lockname } +# define __RWSEM_DEP_MAP_INIT(lockname) \ + , .dep_map = { \ + .name = #lockname, \ + .wait_type_inner = LD_WAIT_SLEEP, \ + } #else # define __RWSEM_DEP_MAP_INIT(lockname) #endif --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -970,6 +970,7 @@ struct task_struct { #ifdef CONFIG_TRACE_IRQFLAGS unsigned int irq_events; + unsigned int hardirq_threaded; unsigned long hardirq_enable_ip; unsigned long hardirq_disable_ip; unsigned int hardirq_enable_event; --- a/include/linux/spinlock.h +++ b/include/linux/spinlock.h @@ -93,12 +93,13 @@ #ifdef CONFIG_DEBUG_SPINLOCK extern void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name, - struct lock_class_key *key); -# define raw_spin_lock_init(lock) \ -do { \ - static struct lock_class_key __key; \ - \ - __raw_spin_lock_init((lock), #lock, &__key); \ + struct lock_class_key *key, short inner); + +# define raw_spin_lock_init(lock) \ +do { \ + static struct lock_class_key __key; \ + \ + __raw_spin_lock_init((lock), #lock, &__key, LD_WAIT_SPIN); \ } while (0) #else @@ -327,12 +328,26 @@ static __always_inline raw_spinlock_t *s return &lock->rlock; } -#define spin_lock_init(_lock) \ -do { \ - spinlock_check(_lock); \ - raw_spin_lock_init(&(_lock)->rlock); \ +#ifdef CONFIG_DEBUG_SPINLOCK + +# define spin_lock_init(lock) \ +do { \ + static struct lock_class_key __key; \ + \ + __raw_spin_lock_init(spinlock_check(lock), \ + #lock, &__key, LD_WAIT_CONFIG); \ +} while (0) + +#else + +# define spin_lock_init(_lock) \ +do { \ + spinlock_check(_lock); \ + *(_lock) = __SPIN_LOCK_UNLOCKED(_lock); \ } while (0) +#endif + static __always_inline void spin_lock(spinlock_t *lock) { raw_spin_lock(&lock->rlock); --- a/include/linux/spinlock_types.h +++ b/include/linux/spinlock_types.h @@ -33,8 +33,18 @@ typedef struct raw_spinlock { #define SPINLOCK_OWNER_INIT ((void *)-1L) #ifdef CONFIG_DEBUG_LOCK_ALLOC -# define SPIN_DEP_MAP_INIT(lockname) .dep_map = { .name = #lockname } +# define RAW_SPIN_DEP_MAP_INIT(lockname) \ + .dep_map = { \ + .name = #lockname, \ + .wait_type_inner = LD_WAIT_SPIN, \ + } +# define SPIN_DEP_MAP_INIT(lockname) \ + .dep_map = { \ + .name = #lockname, \ + .wait_type_inner = LD_WAIT_CONFIG, \ + } #else +# define RAW_SPIN_DEP_MAP_INIT(lockname) # define SPIN_DEP_MAP_INIT(lockname) #endif @@ -51,7 +61,7 @@ typedef struct raw_spinlock { { \ .raw_lock = __ARCH_SPIN_LOCK_UNLOCKED, \ SPIN_DEBUG_INIT(lockname) \ - 
SPIN_DEP_MAP_INIT(lockname) } + RAW_SPIN_DEP_MAP_INIT(lockname) } #define __RAW_SPIN_LOCK_UNLOCKED(lockname) \ (raw_spinlock_t) __RAW_SPIN_LOCK_INITIALIZER(lockname) @@ -72,11 +82,17 @@ typedef struct spinlock { }; } spinlock_t; +#define ___SPIN_LOCK_INITIALIZER(lockname) \ + { \ + .raw_lock = __ARCH_SPIN_LOCK_UNLOCKED, \ + SPIN_DEBUG_INIT(lockname) \ + SPIN_DEP_MAP_INIT(lockname) } + #define __SPIN_LOCK_INITIALIZER(lockname) \ - { { .rlock = __RAW_SPIN_LOCK_INITIALIZER(lockname) } } + { { .rlock = ___SPIN_LOCK_INITIALIZER(lockname) } } #define __SPIN_LOCK_UNLOCKED(lockname) \ - (spinlock_t ) __SPIN_LOCK_INITIALIZER(lockname) + (spinlock_t) __SPIN_LOCK_INITIALIZER(lockname) #define DEFINE_SPINLOCK(x) spinlock_t x = __SPIN_LOCK_UNLOCKED(x) --- a/kernel/irq/handle.c +++ b/kernel/irq/handle.c @@ -145,6 +145,13 @@ irqreturn_t __handle_irq_event_percpu(st for_each_action_of_desc(desc, action) { irqreturn_t res; + /* + * If this IRQ would be threaded under force_irqthreads, mark it so. + */ + if (irq_settings_can_thread(desc) && + !(action->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT))) + trace_hardirq_threaded(); + trace_irq_handler_entry(irq, action); res = action->handler(irq, action->dev_id); trace_irq_handler_exit(irq, action, res); --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -653,7 +653,9 @@ static void print_lock_name(struct lock_ printk(KERN_CONT " ("); __print_lock_name(class); - printk(KERN_CONT "){%s}", usage); + printk(KERN_CONT "){%s}-{%hd:%hd}", usage, + class->wait_type_outer ?: class->wait_type_inner, + class->wait_type_inner); } static void print_lockdep_cache(struct lockdep_map *lock) @@ -1230,6 +1232,8 @@ register_lock_class(struct lockdep_map * WARN_ON_ONCE(!list_empty(&class->locks_before)); WARN_ON_ONCE(!list_empty(&class->locks_after)); class->name_version = count_matching_names(class); + class->wait_type_inner = lock->wait_type_inner; + class->wait_type_outer = lock->wait_type_outer; /* * We use RCU's safe list-add method to make * parallel walking of the hash-list safe: @@ -3682,6 +3686,113 @@ static int mark_lock(struct task_struct return ret; } +static int +print_lock_invalid_wait_context(struct task_struct *curr, + struct held_lock *hlock) +{ + if (!debug_locks_off()) + return 0; + if (debug_locks_silent) + return 0; + + pr_warn("\n"); + pr_warn("=============================\n"); + pr_warn("[ BUG: Invalid wait context ]\n"); + print_kernel_ident(); + pr_warn("-----------------------------\n"); + + pr_warn("%s/%d is trying to lock:\n", curr->comm, task_pid_nr(curr)); + print_lock(hlock); + + pr_warn("other info that might help us debug this:\n"); + lockdep_print_held_locks(curr); + + pr_warn("stack backtrace:\n"); + dump_stack(); + + return 0; +} + +/* + * Verify the wait_type context. + * + * This check validates we takes locks in the right wait-type order; that is it + * ensures that we do not take mutexes inside spinlocks and do not attempt to + * acquire spinlocks inside raw_spinlocks and the sort. + * + * The entire thing is slightly more complex because of RCU, RCU is a lock that + * can be taken from (pretty much) any context but also has constraints. + * However when taken in a stricter environment the RCU lock does not loosen + * the constraints. + * + * Therefore we must look for the strictest environment in the lock stack and + * compare that to the lock we're trying to acquire. 
+ */ +static int check_wait_context(struct task_struct *curr, struct held_lock *next) +{ + short next_inner = hlock_class(next)->wait_type_inner; + short next_outer = hlock_class(next)->wait_type_outer; + short curr_inner; + int depth; + + if (!curr->lockdep_depth || !next_inner || next->trylock) + return 0; + + if (!next_outer) + next_outer = next_inner; + + /* + * Find start of current irq_context.. + */ + for (depth = curr->lockdep_depth - 1; depth >= 0; depth--) { + struct held_lock *prev = curr->held_locks + depth; + if (prev->irq_context != next->irq_context) + break; + } + depth++; + + /* + * Set appropriate wait type for the context; for IRQs we have to take + * into account force_irqthread as that is implied by PREEMPT_RT. + */ + if (curr->hardirq_context) { + /* + * Check if force_irqthreads will run us threaded. + */ + if (curr->hardirq_threaded) + curr_inner = LD_WAIT_CONFIG; + else + curr_inner = LD_WAIT_SPIN; + } else if (curr->softirq_context) { + /* + * Softirqs are always threaded. + */ + curr_inner = LD_WAIT_CONFIG; + } else { + curr_inner = LD_WAIT_MAX; + } + + for (; depth < curr->lockdep_depth; depth++) { + struct held_lock *prev = curr->held_locks + depth; + short prev_inner = hlock_class(prev)->wait_type_inner; + + if (prev_inner) { + /* + * We can have a bigger inner than a previous one + * when outer is smaller than inner, as with RCU. + * + * Also due to trylocks. + */ + curr_inner = min(curr_inner, prev_inner); + } + } + + if (next_outer > curr_inner) + return print_lock_invalid_wait_context(curr, next); + + return 0; +} + #else /* CONFIG_PROVE_LOCKING */ static inline int @@ -3701,13 +3812,20 @@ static inline int separate_irq_context(s return 0; } +static inline int check_wait_context(struct task_struct *curr, + struct held_lock *next) +{ + return 0; +} + #endif /* CONFIG_PROVE_LOCKING */ /* * Initialize a lock instance's lock-class mapping info: */ -void lockdep_init_map(struct lockdep_map *lock, const char *name, - struct lock_class_key *key, int subclass) +void lockdep_init_map_waits(struct lockdep_map *lock, const char *name, + struct lock_class_key *key, int subclass, + short inner, short outer) { int i; @@ -3728,6 +3846,9 @@ void lockdep_init_map(struct lockdep_map lock->name = name; + lock->wait_type_outer = outer; + lock->wait_type_inner = inner; + /* * No key, no joy, we need to hash something. 
*/ @@ -3761,7 +3882,7 @@ void lockdep_init_map(struct lockdep_map raw_local_irq_restore(flags); } } -EXPORT_SYMBOL_GPL(lockdep_init_map); +EXPORT_SYMBOL_GPL(lockdep_init_map_waits); struct lock_class_key __lockdep_no_validate__; EXPORT_SYMBOL_GPL(__lockdep_no_validate__); @@ -3862,7 +3983,7 @@ static int __lock_acquire(struct lockdep class_idx = class - lock_classes; - if (depth) { + if (depth) { /* we're holding locks */ hlock = curr->held_locks + depth - 1; if (hlock->class_idx == class_idx && nest_lock) { if (!references) @@ -3904,6 +4025,9 @@ static int __lock_acquire(struct lockdep #endif hlock->pin_count = pin_count; + if (check_wait_context(curr, hlock)) + return 0; + /* Initialize the lock usage bit */ if (!mark_usage(curr, hlock, check)) return 0; @@ -4139,7 +4263,9 @@ static int return 0; } - lockdep_init_map(lock, name, key, 0); + lockdep_init_map_waits(lock, name, key, 0, + lock->wait_type_inner, + lock->wait_type_outer); class = register_lock_class(lock, subclass, 0); hlock->class_idx = class - lock_classes; --- a/kernel/locking/mutex-debug.c +++ b/kernel/locking/mutex-debug.c @@ -85,7 +85,7 @@ void debug_mutex_init(struct mutex *lock * Make sure we are not reinitializing a held lock: */ debug_check_no_locks_freed((void *)lock, sizeof(*lock)); - lockdep_init_map(&lock->dep_map, name, key, 0); + lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_SLEEP); #endif lock->magic = lock; } --- a/kernel/locking/rwsem.c +++ b/kernel/locking/rwsem.c @@ -329,7 +329,7 @@ void __init_rwsem(struct rw_semaphore *s * Make sure we are not reinitializing a held semaphore: */ debug_check_no_locks_freed((void *)sem, sizeof(*sem)); - lockdep_init_map(&sem->dep_map, name, key, 0); + lockdep_init_map_wait(&sem->dep_map, name, key, 0, LD_WAIT_SLEEP); #endif #ifdef CONFIG_DEBUG_RWSEMS sem->magic = sem; --- a/kernel/locking/spinlock_debug.c +++ b/kernel/locking/spinlock_debug.c @@ -14,14 +14,14 @@ #include void __raw_spin_lock_init(raw_spinlock_t *lock, const char *name, - struct lock_class_key *key) + struct lock_class_key *key, short inner) { #ifdef CONFIG_DEBUG_LOCK_ALLOC /* * Make sure we are not reinitializing a held lock: */ debug_check_no_locks_freed((void *)lock, sizeof(*lock)); - lockdep_init_map(&lock->dep_map, name, key, 0); + lockdep_init_map_wait(&lock->dep_map, name, key, 0, inner); #endif lock->raw_lock = (arch_spinlock_t)__ARCH_SPIN_LOCK_UNLOCKED; lock->magic = SPINLOCK_MAGIC; @@ -39,7 +39,7 @@ void __rwlock_init(rwlock_t *lock, const * Make sure we are not reinitializing a held lock: */ debug_check_no_locks_freed((void *)lock, sizeof(*lock)); - lockdep_init_map(&lock->dep_map, name, key, 0); + lockdep_init_map_wait(&lock->dep_map, name, key, 0, LD_WAIT_CONFIG); #endif lock->raw_lock = (arch_rwlock_t) __ARCH_RW_LOCK_UNLOCKED; lock->magic = RWLOCK_MAGIC; --- a/kernel/rcu/update.c +++ b/kernel/rcu/update.c @@ -227,18 +227,30 @@ core_initcall(rcu_set_runtime_mode); #ifdef CONFIG_DEBUG_LOCK_ALLOC static struct lock_class_key rcu_lock_key; -struct lockdep_map rcu_lock_map = - STATIC_LOCKDEP_MAP_INIT("rcu_read_lock", &rcu_lock_key); +struct lockdep_map rcu_lock_map = { + .name = "rcu_read_lock", + .key = &rcu_lock_key, + .wait_type_outer = LD_WAIT_FREE, + .wait_type_inner = LD_WAIT_CONFIG, /* XXX PREEMPT_RCU ? 
 
 static struct lock_class_key rcu_bh_lock_key;
-struct lockdep_map rcu_bh_lock_map =
-	STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_bh", &rcu_bh_lock_key);
+struct lockdep_map rcu_bh_lock_map = {
+	.name = "rcu_read_lock_bh",
+	.key = &rcu_bh_lock_key,
+	.wait_type_outer = LD_WAIT_FREE,
+	.wait_type_inner = LD_WAIT_CONFIG, /* PREEMPT_LOCK also makes BH preemptible */
+};
 EXPORT_SYMBOL_GPL(rcu_bh_lock_map);
 
 static struct lock_class_key rcu_sched_lock_key;
-struct lockdep_map rcu_sched_lock_map =
-	STATIC_LOCKDEP_MAP_INIT("rcu_read_lock_sched", &rcu_sched_lock_key);
+struct lockdep_map rcu_sched_lock_map = {
+	.name = "rcu_read_lock_sched",
+	.key = &rcu_sched_lock_key,
+	.wait_type_outer = LD_WAIT_FREE,
+	.wait_type_inner = LD_WAIT_SPIN,
+};
 EXPORT_SYMBOL_GPL(rcu_sched_lock_map);
 
 static struct lock_class_key rcu_callback_key;
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1086,6 +1086,23 @@ config PROVE_LOCKING
 
	 For more details, see Documentation/locking/lockdep-design.rst.
 
+config PROVE_RAW_LOCK_NESTING
+	bool "Enable raw_spinlock - spinlock nesting checks"
+	depends on PROVE_LOCKING
+	default n
+	help
+	 Enable the raw_spinlock vs. spinlock nesting checks which ensure
+	 that the lock nesting rules for PREEMPT_RT enabled kernels are
+	 not violated.
+
+	 NOTE: There are known nesting problems, so if you enable this
+	 option expect lockdep splats until these problems have been fully
+	 addressed; resolving them is work in progress. This config switch
+	 allows you to identify and analyze these problems. It will be
+	 removed and the check permanently enabled once the main issues
+	 have been fixed.
+
+	 If unsure, select N.
+
 config LOCK_STAT
	bool "Lock usage statistics"
	depends on DEBUG_KERNEL && LOCK_DEBUGGING_SUPPORT
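For completeness, a hypothetical .config fragment that enables the new check (PROVE_LOCKING pulls in its own dependencies, such as DEBUG_KERNEL and LOCK_DEBUGGING_SUPPORT):

	CONFIG_DEBUG_KERNEL=y
	CONFIG_PROVE_LOCKING=y
	CONFIG_PROVE_RAW_LOCK_NESTING=y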
Miller" , linux-wireless@vger.kernel.org, netdev@vger.kernel.org, Darren Hart , Andy Shevchenko , platform-driver-x86@vger.kernel.org, Zhang Rui , "Rafael J. Wysocki" , linux-pm@vger.kernel.org, Len Brown , linux-acpi@vger.kernel.org, kbuild test robot , Nick Hu , Greentime Hu , Vincent Chen , Guo Ren , linux-csky@vger.kernel.org, Brian Cain , linux-hexagon@vger.kernel.org, Tony Luck , Fenghua Yu , linux-ia64@vger.kernel.org, Michal Simek , Michael Ellerman , Arnd Bergmann , Geoff Levand , linuxppc-dev@lists.ozlabs.org, "Paul E . McKenney" , Jonathan Corbet , Randy Dunlap , Davidlohr Bueso Subject: [patch V3 18/20] lockdep: Add hrtimer context tracing bits References: <20200321112544.878032781@linutronix.de> MIME-Version: 1.0 Content-transfer-encoding: 8-bit X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org From: Sebastian Andrzej Siewior Set current->irq_config = 1 for hrtimers which are not marked to expire in hard interrupt context during hrtimer_init(). These timers will expire in softirq context on PREEMPT_RT. Setting this allows lockdep to differentiate these timers. If a timer is marked to expire in hard interrupt context then the timer callback is not supposed to acquire a regular spinlock instead of a raw_spinlock in the expiry callback. Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Thomas Gleixner --- include/linux/irqflags.h | 15 +++++++++++++++ include/linux/sched.h | 1 + kernel/locking/lockdep.c | 2 +- kernel/time/hrtimer.c | 6 +++++- 4 files changed, 22 insertions(+), 2 deletions(-) --- a/include/linux/irqflags.h +++ b/include/linux/irqflags.h @@ -56,6 +56,19 @@ do { \ do { \ current->softirq_context--; \ } while (0) + +# define lockdep_hrtimer_enter(__hrtimer) \ + do { \ + if (!__hrtimer->is_hard) \ + current->irq_config = 1; \ + } while (0) + +# define lockdep_hrtimer_exit(__hrtimer) \ + do { \ + if (!__hrtimer->is_hard) \ + current->irq_config = 0; \ + } while (0) + #else # define trace_hardirqs_on() do { } while (0) # define trace_hardirqs_off() do { } while (0) @@ -68,6 +81,8 @@ do { \ # define trace_hardirq_exit() do { } while (0) # define lockdep_softirq_enter() do { } while (0) # define lockdep_softirq_exit() do { } while (0) +# define lockdep_hrtimer_enter(__hrtimer) do { } while (0) +# define lockdep_hrtimer_exit(__hrtimer) do { } while (0) #endif #if defined(CONFIG_IRQSOFF_TRACER) || \ --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -983,6 +983,7 @@ struct task_struct { unsigned int softirq_enable_event; int softirqs_enabled; int softirq_context; + int irq_config; #endif #ifdef CONFIG_LOCKDEP --- a/kernel/locking/lockdep.c +++ b/kernel/locking/lockdep.c @@ -3759,7 +3759,7 @@ static int check_wait_context(struct tas /* * Check if force_irqthreads will run us threaded. */ - if (curr->hardirq_threaded) + if (curr->hardirq_threaded || curr->irq_config) curr_inner = LD_WAIT_CONFIG; else curr_inner = LD_WAIT_SPIN; --- a/kernel/time/hrtimer.c +++ b/kernel/time/hrtimer.c @@ -1404,7 +1404,7 @@ static void __hrtimer_init(struct hrtime base = softtimer ? 
--- a/kernel/time/hrtimer.c
+++ b/kernel/time/hrtimer.c
@@ -1404,7 +1404,7 @@ static void __hrtimer_init(struct hrtime
	base = softtimer ? HRTIMER_MAX_CLOCK_BASES / 2 : 0;
	base += hrtimer_clockid_to_base(clock_id);
	timer->is_soft = softtimer;
-	timer->is_hard = !softtimer;
+	timer->is_hard = !!(mode & HRTIMER_MODE_HARD);
	timer->base = &cpu_base->clock_base[base];
	timerqueue_init(&timer->node);
 }
@@ -1514,7 +1514,11 @@ static void __run_hrtimer(struct hrtimer
	 */
	raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
	trace_hrtimer_expire_entry(timer, now);
+	lockdep_hrtimer_enter(timer);
+
	restart = fn(timer);
+
+	lockdep_hrtimer_exit(timer);
	trace_hrtimer_expire_exit(timer);
	raw_spin_lock_irq(&cpu_base->lock);

From patchwork Sat Mar 21 11:26:03 2020
Message-Id: <20200321113242.643576700@linutronix.de>
Date: Sat, 21 Mar 2020 12:26:03 +0100
From: Thomas Gleixner
To: LKML
Subject: [patch V3 19/20] lockdep: Annotate irq_work
References: <20200321112544.878032781@linutronix.de>

From: Sebastian Andrzej Siewior

Mark irq_work items which must be invoked in hard interrupt context even on
PREEMPT_RT with IRQ_WORK_HARD_IRQ.
irq_work items without this flag will be invoked in softirq context on
PREEMPT_RT.

Set ->irq_config to 1 for the irq_work items which are invoked in softirq
context so lockdep knows that these can safely acquire a spinlock_t.

Signed-off-by: Sebastian Andrzej Siewior
Signed-off-by: Thomas Gleixner
---
 include/linux/irq_work.h |    2 ++
 include/linux/irqflags.h |   13 +++++++++++++
 kernel/irq_work.c        |    2 ++
 kernel/rcu/tree.c        |    1 +
 kernel/time/tick-sched.c |    1 +
 5 files changed, 19 insertions(+)

--- a/include/linux/irq_work.h
+++ b/include/linux/irq_work.h
@@ -18,6 +18,8 @@
 
 /* Doesn't want IPI, wait for tick: */
 #define IRQ_WORK_LAZY		BIT(2)
+/* Run hard IRQ context, even on RT */
+#define IRQ_WORK_HARD_IRQ	BIT(3)
 
 #define IRQ_WORK_CLAIMED	(IRQ_WORK_PENDING | IRQ_WORK_BUSY)
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -69,6 +69,17 @@ do {						\
	  current->irq_config = 0;		\
 } while (0)
+
+# define lockdep_irq_work_enter(__work)					\
+	  do {								\
+		  if (!(atomic_read(&__work->flags) & IRQ_WORK_HARD_IRQ))\
+			current->irq_config = 1;			\
+	  } while (0)
+# define lockdep_irq_work_exit(__work)					\
+	  do {								\
+		  if (!(atomic_read(&__work->flags) & IRQ_WORK_HARD_IRQ))\
+			current->irq_config = 0;			\
+	  } while (0)
+
 #else
 # define trace_hardirqs_on()		do { } while (0)
 # define trace_hardirqs_off()		do { } while (0)
@@ -83,6 +94,8 @@ do {						\
 # define lockdep_softirq_exit()		do { } while (0)
 # define lockdep_hrtimer_enter(__hrtimer)	do { } while (0)
 # define lockdep_hrtimer_exit(__hrtimer)	do { } while (0)
+# define lockdep_irq_work_enter(__work)		do { } while (0)
+# define lockdep_irq_work_exit(__work)		do { } while (0)
 #endif
 
 #if defined(CONFIG_IRQSOFF_TRACER) || \
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -153,7 +153,9 @@ static void irq_work_run_list(struct lli
		 */
		flags = atomic_fetch_andnot(IRQ_WORK_PENDING, &work->flags);
 
+		lockdep_irq_work_enter(work);
		work->func(work);
+		lockdep_irq_work_exit(work);
		/*
		 * Clear the BUSY bit and return to the free state if
		 * no-one else claimed it meanwhile.
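A hypothetical usage sketch (not from the patch) showing both kinds of irq_work items under the new annotation; the two in-tree users follow in the rcu/tree.c and tick-sched.c hunks below:

	#include <linux/irq_work.h>

	static void hard_handler(struct irq_work *work)
	{
		/* Runs in hardirq even on RT: only raw_spinlock_t may be taken. */
	}

	static void soft_handler(struct irq_work *work)
	{
		/* Runs from softirq on RT (->irq_config == 1): spinlock_t is fine. */
	}

	static struct irq_work hard_work, soft_work;

	static void setup(void)
	{
		init_irq_work(&hard_work, hard_handler);
		atomic_set(&hard_work.flags, IRQ_WORK_HARD_IRQ); /* as rcu/tree.c does */

		init_irq_work(&soft_work, soft_handler);	 /* default: softirq on RT */

		irq_work_queue(&hard_work);
		irq_work_queue(&soft_work);
	}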
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1113,6 +1113,7 @@ static int rcu_implicit_dynticks_qs(stru
	    !rdp->rcu_iw_pending && rdp->rcu_iw_gp_seq != rnp->gp_seq &&
	    (rnp->ffmask & rdp->grpmask)) {
		init_irq_work(&rdp->rcu_iw, rcu_iw_handler);
+		atomic_set(&rdp->rcu_iw.flags, IRQ_WORK_HARD_IRQ);
		rdp->rcu_iw_pending = true;
		rdp->rcu_iw_gp_seq = rnp->gp_seq;
		irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -245,6 +245,7 @@ static void nohz_full_kick_func(struct i
 
 static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) = {
	.func = nohz_full_kick_func,
+	.flags = ATOMIC_INIT(IRQ_WORK_HARD_IRQ),
 };
 
 /*

From patchwork Sat Mar 21 11:26:04 2020
Message-Id: <20200321113242.751182723@linutronix.de>
Date: Sat, 21 Mar 2020 12:26:04 +0100
From: Thomas Gleixner
To: LKML
Subject: [patch V3 20/20] lockdep: Add posixtimer context tracing bits
References: <20200321112544.878032781@linutronix.de>
McKenney" , Jonathan Corbet , Randy Dunlap , Davidlohr Bueso Subject: [patch V3 20/20] lockdep: Add posixtimer context tracing bits References: <20200321112544.878032781@linutronix.de> MIME-Version: 1.0 Content-transfer-encoding: 8-bit X-Linutronix-Spam-Score: -1.0 X-Linutronix-Spam-Level: - X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,SHORTCIRCUIT=-0.0001 Sender: linux-pm-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-pm@vger.kernel.org From: Sebastian Andrzej Siewior Splitting run_posix_cpu_timers() into two parts is work in progress which is stuck on other entry code related problems. The heavy lifting which involves locking of sighand lock will be moved into task context so the necessary execution time is burdened on the task and not on interrupt context. Until this work completes lockdep with the spinlock nesting rules enabled would emit warnings for this known context. Prevent it by setting "->irq_config = 1" for the invocation of run_posix_cpu_timers() so lockdep does not complain when sighand lock is acquried. This will be removed once the split is completed. Signed-off-by: Sebastian Andrzej Siewior Signed-off-by: Thomas Gleixner --- include/linux/irqflags.h | 12 ++++++++++++ kernel/time/posix-cpu-timers.c | 6 +++++- 2 files changed, 17 insertions(+), 1 deletion(-) --- a/include/linux/irqflags.h +++ b/include/linux/irqflags.h @@ -69,6 +69,16 @@ do { \ current->irq_config = 0; \ } while (0) +# define lockdep_posixtimer_enter() \ + do { \ + current->irq_config = 1; \ + } while (0) + +# define lockdep_posixtimer_exit() \ + do { \ + current->irq_config = 0; \ + } while (0) + # define lockdep_irq_work_enter(__work) \ do { \ if (!(atomic_read(&__work->flags) & IRQ_WORK_HARD_IRQ))\ @@ -94,6 +104,8 @@ do { \ # define lockdep_softirq_exit() do { } while (0) # define lockdep_hrtimer_enter(__hrtimer) do { } while (0) # define lockdep_hrtimer_exit(__hrtimer) do { } while (0) +# define lockdep_posixtimer_enter() do { } while (0) +# define lockdep_posixtimer_exit() do { } while (0) # define lockdep_irq_work_enter(__work) do { } while (0) # define lockdep_irq_work_exit(__work) do { } while (0) #endif --- a/kernel/time/posix-cpu-timers.c +++ b/kernel/time/posix-cpu-timers.c @@ -1126,8 +1126,11 @@ void run_posix_cpu_timers(void) if (!fastpath_timer_check(tsk)) return; - if (!lock_task_sighand(tsk, &flags)) + lockdep_posixtimer_enter(); + if (!lock_task_sighand(tsk, &flags)) { + lockdep_posixtimer_exit(); return; + } /* * Here we take off tsk->signal->cpu_timers[N] and * tsk->cpu_timers[N] all the timers that are firing, and @@ -1169,6 +1172,7 @@ void run_posix_cpu_timers(void) cpu_timer_fire(timer); spin_unlock(&timer->it_lock); } + lockdep_posixtimer_exit(); } /*