From patchwork Sat Mar 21 11:25:59 2020
X-Patchwork-Submitter: Thomas Gleixner
X-Patchwork-Id: 11451055
Message-Id: <20200321113242.228481202@linutronix.de>
Date: Sat, 21 Mar 2020 12:25:59 +0100
From: Thomas Gleixner
To: LKML
Cc: Peter Zijlstra, Ingo Molnar, Sebastian Siewior, Linus Torvalds,
    Joel Fernandes, Oleg Nesterov, Davidlohr Bueso, Logan Gunthorpe,
    Bjorn Helgaas, Kurt Schwemmer, linux-pci@vger.kernel.org,
    Greg Kroah-Hartman, Felipe Balbi, linux-usb@vger.kernel.org,
    Kalle Valo, "David S. Miller", linux-wireless@vger.kernel.org,
    netdev@vger.kernel.org, Darren Hart, Andy Shevchenko,
    platform-driver-x86@vger.kernel.org, Zhang Rui, "Rafael J. Wysocki",
    linux-pm@vger.kernel.org, Len Brown, linux-acpi@vger.kernel.org,
    kbuild test robot, Nick Hu, Greentime Hu, Vincent Chen, Guo Ren,
    linux-csky@vger.kernel.org, Brian Cain, linux-hexagon@vger.kernel.org,
    Tony Luck, Fenghua Yu, linux-ia64@vger.kernel.org, Michal Simek,
    Michael Ellerman, Arnd Bergmann, Geoff Levand,
    linuxppc-dev@lists.ozlabs.org, "Paul E. McKenney", Jonathan Corbet,
    Randy Dunlap, Davidlohr Bueso
Subject: [patch V3 15/20] sched/swait: Prepare usage in completions
References: <20200321112544.878032781@linutronix.de>

From: Thomas Gleixner

As a preparation to use simple wait queues for completions:

  - Provide swake_up_all_locked() to support complete_all()

  - Make __prepare_to_swait() publicly available

This is done to enable the usage of complete() within truly atomic contexts
on a PREEMPT_RT enabled kernel.
Signed-off-by: Thomas Gleixner
Cc: Linus Torvalds
---
V2: Add comment to swake_up_all_locked()
---
 kernel/sched/sched.h |    3 +++
 kernel/sched/swait.c |   15 ++++++++++++++-
 2 files changed, 17 insertions(+), 1 deletion(-)

--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2492,3 +2492,6 @@ static inline bool is_per_cpu_kthread(st
 	return true;
 }
 #endif
+
+void swake_up_all_locked(struct swait_queue_head *q);
+void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
--- a/kernel/sched/swait.c
+++ b/kernel/sched/swait.c
@@ -32,6 +32,19 @@ void swake_up_locked(struct swait_queue_
 }
 EXPORT_SYMBOL(swake_up_locked);
 
+/*
+ * Wake up all waiters. This is an interface which is solely exposed for
+ * completions and not for general usage.
+ *
+ * It is intentionally different from swake_up_all() to allow usage from
+ * hard interrupt context and interrupt disabled regions.
+ */
+void swake_up_all_locked(struct swait_queue_head *q)
+{
+	while (!list_empty(&q->task_list))
+		swake_up_locked(q);
+}
+
 void swake_up_one(struct swait_queue_head *q)
 {
 	unsigned long flags;
@@ -69,7 +82,7 @@ void swake_up_all(struct swait_queue_hea
 }
 EXPORT_SYMBOL(swake_up_all);
 
-static void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait)
+void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait)
 {
 	wait->task = current;
 	if (list_empty(&wait->task_list))
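
For context, a minimal sketch of how a later swait based completion could use
swake_up_all_locked() from its complete_all() path while holding the queue
lock. This is not part of the patch; the struct layout and the function name
swait_complete_all() are assumptions for illustration only.

	/* Hypothetical swait based completion, layout assumed for illustration. */
	struct swait_completion {
		unsigned int		done;
		struct swait_queue_head	wait;
	};

	static void swait_complete_all(struct swait_completion *x)
	{
		unsigned long flags;

		/*
		 * Take the queue lock with interrupts disabled, which is what
		 * makes the wakeup usable from hard interrupt context and
		 * interrupt disabled regions.
		 */
		raw_spin_lock_irqsave(&x->wait.lock, flags);
		x->done = UINT_MAX;		/* no later waiter needs to block */
		swake_up_all_locked(&x->wait);	/* caller must hold the queue lock */
		raw_spin_unlock_irqrestore(&x->wait.lock, flags);
	}

The waiter side would pair this with __prepare_to_swait() under the same lock
before checking x->done and scheduling out, which is why that helper is made
available outside of swait.c here.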