From patchwork Tue Jan 29 18:48:51 2019
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 10786937
X-Patchwork-Delegate: rjw@sisk.pl
From: James Morse
To: linux-acpi@vger.kernel.org
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org,
    linux-mm@kvack.org, Borislav Petkov, Marc Zyngier, Christoffer Dall,
    Will Deacon, Catalin Marinas, Naoya Horiguchi, Rafael Wysocki,
    Len Brown, Tony Luck, Dongjiu Geng, Xie XiuQi, james.morse@arm.com
Subject: [PATCH v8 15/26] ACPI / APEI: Move locking to the notification helper
Date: Tue, 29 Jan 2019 18:48:51 +0000
Message-Id: <20190129184902.102850-16-james.morse@arm.com>
In-Reply-To: <20190129184902.102850-1-james.morse@arm.com>
References: <20190129184902.102850-1-james.morse@arm.com>
X-Mailer: git-send-email 2.20.1

ghes_copy_tofrom_phys() takes different locks depending on in_nmi().
This doesn't work if there are multiple NMI-like notifications that can
interrupt each other.

Now that NOTIFY_SEA is always called in the same context, move the
lock-taking to the notification helper. The helper always knows which
lock to take, which avoids ghes_copy_tofrom_phys() taking a guess based
on in_nmi(). This splits NOTIFY_NMI and NOTIFY_SEA onto different locks.

All the other notifications use ghes_proc() and are called in process
or IRQ context. Move the spin_lock_irqsave() around their ghes_proc()
calls.

Signed-off-by: James Morse
Reviewed-by: Borislav Petkov
---
Changes since v7:
 * Moved the locks into the functions that use them, to make it clearer
   they are only used in_nmi().
Changes since v6:
 * Tinkered with the commit message
 * Lock definitions have moved due to the #ifdefs
---
 drivers/acpi/apei/ghes.c | 34 +++++++++++++++++++++++++---------
 1 file changed, 25 insertions(+), 9 deletions(-)

diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
index ab794ab29554..c6bc73281d6a 100644
--- a/drivers/acpi/apei/ghes.c
+++ b/drivers/acpi/apei/ghes.c
@@ -114,11 +114,10 @@ static DEFINE_MUTEX(ghes_list_mutex);
  * handler, but general ioremap can not be used in atomic context, so
  * the fixmap is used instead.
  *
- * These 2 spinlocks are used to prevent the fixmap entries from being used
+ * This spinlock is used to prevent the fixmap entry from being used
  * simultaneously.
  */
-static DEFINE_RAW_SPINLOCK(ghes_ioremap_lock_nmi);
-static DEFINE_SPINLOCK(ghes_ioremap_lock_irq);
+static DEFINE_SPINLOCK(ghes_notify_lock_irq);
 
 static struct gen_pool *ghes_estatus_pool;
 static unsigned long ghes_estatus_pool_size_request;
@@ -287,7 +286,6 @@ static void ghes_copy_tofrom_phys(void *buffer, u64 paddr, u32 len,
                                   int from_phys)
 {
         void __iomem *vaddr;
-        unsigned long flags = 0;
         int in_nmi = in_nmi();
         u64 offset;
         u32 trunk;
@@ -295,10 +293,8 @@ static void ghes_copy_tofrom_phys(void *buffer, u64 paddr, u32 len,
         while (len > 0) {
                 offset = paddr - (paddr & PAGE_MASK);
                 if (in_nmi) {
-                        raw_spin_lock(&ghes_ioremap_lock_nmi);
                         vaddr = ghes_ioremap_pfn_nmi(paddr >> PAGE_SHIFT);
                 } else {
-                        spin_lock_irqsave(&ghes_ioremap_lock_irq, flags);
                         vaddr = ghes_ioremap_pfn_irq(paddr >> PAGE_SHIFT);
                 }
                 trunk = PAGE_SIZE - offset;
@@ -312,10 +308,8 @@ static void ghes_copy_tofrom_phys(void *buffer, u64 paddr, u32 len,
                 buffer += trunk;
                 if (in_nmi) {
                         ghes_iounmap_nmi();
-                        raw_spin_unlock(&ghes_ioremap_lock_nmi);
                 } else {
                         ghes_iounmap_irq();
-                        spin_unlock_irqrestore(&ghes_ioremap_lock_irq, flags);
                 }
         }
 }
@@ -729,8 +723,11 @@ static void ghes_add_timer(struct ghes *ghes)
 static void ghes_poll_func(struct timer_list *t)
 {
         struct ghes *ghes = from_timer(ghes, t, timer);
+        unsigned long flags;
 
+        spin_lock_irqsave(&ghes_notify_lock_irq, flags);
         ghes_proc(ghes);
+        spin_unlock_irqrestore(&ghes_notify_lock_irq, flags);
         if (!(ghes->flags & GHES_EXITING))
                 ghes_add_timer(ghes);
 }
@@ -738,9 +735,12 @@ static void ghes_poll_func(struct timer_list *t)
 static irqreturn_t ghes_irq_func(int irq, void *data)
 {
         struct ghes *ghes = data;
+        unsigned long flags;
         int rc;
 
+        spin_lock_irqsave(&ghes_notify_lock_irq, flags);
         rc = ghes_proc(ghes);
+        spin_unlock_irqrestore(&ghes_notify_lock_irq, flags);
         if (rc)
                 return IRQ_NONE;
 
@@ -751,14 +751,17 @@ static int ghes_notify_hed(struct notifier_block *this, unsigned long event,
                            void *data)
 {
         struct ghes *ghes;
+        unsigned long flags;
         int ret = NOTIFY_DONE;
 
+        spin_lock_irqsave(&ghes_notify_lock_irq, flags);
         rcu_read_lock();
         list_for_each_entry_rcu(ghes, &ghes_hed, list) {
                 if (!ghes_proc(ghes))
                         ret = NOTIFY_OK;
         }
         rcu_read_unlock();
+        spin_unlock_irqrestore(&ghes_notify_lock_irq, flags);
 
         return ret;
 }
@@ -913,7 +916,14 @@ static LIST_HEAD(ghes_sea);
  */
 int ghes_notify_sea(void)
 {
-        return ghes_in_nmi_spool_from_list(&ghes_sea);
+        static DEFINE_RAW_SPINLOCK(ghes_notify_lock_sea);
+        int rv;
+
+        raw_spin_lock(&ghes_notify_lock_sea);
+        rv = ghes_in_nmi_spool_from_list(&ghes_sea);
+        raw_spin_unlock(&ghes_notify_lock_sea);
+
+        return rv;
 }
 
 static void ghes_sea_add(struct ghes *ghes)
@@ -946,14 +956,17 @@ static LIST_HEAD(ghes_nmi);
 static int ghes_notify_nmi(unsigned int cmd, struct pt_regs *regs)
 {
+        static DEFINE_RAW_SPINLOCK(ghes_notify_lock_nmi);
         int err, ret = NMI_DONE;
 
         if (!atomic_add_unless(&ghes_in_nmi, 1, 1))
                 return ret;
 
+        raw_spin_lock(&ghes_notify_lock_nmi);
         err = ghes_in_nmi_spool_from_list(&ghes_nmi);
         if (!err)
                 ret = NMI_HANDLED;
+        raw_spin_unlock(&ghes_notify_lock_nmi);
 
         atomic_dec(&ghes_in_nmi);
 
         return ret;
@@ -995,6 +1008,7 @@ static int ghes_probe(struct platform_device *ghes_dev)
 {
         struct acpi_hest_generic *generic;
         struct ghes *ghes = NULL;
+        unsigned long flags;
 
         int rc = -EINVAL;
 
@@ -1097,7 +1111,9 @@ static int ghes_probe(struct platform_device *ghes_dev)
         ghes_edac_register(ghes, &ghes_dev->dev);
 
         /* Handle any pending errors right away */
+        spin_lock_irqsave(&ghes_notify_lock_irq, flags);
         ghes_proc(ghes);
+        spin_unlock_irqrestore(&ghes_notify_lock_irq, flags);
 
         return 0;
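
To summarise the locking change in one place: before this patch, ghes_copy_tofrom_phys() chose between two fixmap locks based on in_nmi(); after it, each notification helper takes its own lock before calling into the GHES code. The sketch below is a condensed before/after illustration only, not part of the patch: the *_example functions are placeholders, the GHES work is elided to comments, and it is not intended to build outside a kernel tree.

#include <linux/spinlock.h>
#include <linux/preempt.h>      /* in_nmi() */

/* Before: one function guessed which lock applied from its own context. */
static DEFINE_RAW_SPINLOCK(ghes_ioremap_lock_nmi);
static DEFINE_SPINLOCK(ghes_ioremap_lock_irq);

static void copy_tofrom_phys_before_example(void)
{
        unsigned long flags = 0;

        if (in_nmi()) {
                /*
                 * A second NMI-like notification arriving here also sees
                 * in_nmi() and tries to take the same non-recursive lock.
                 */
                raw_spin_lock(&ghes_ioremap_lock_nmi);
                /* ... map the fixmap entry and copy ... */
                raw_spin_unlock(&ghes_ioremap_lock_nmi);
        } else {
                spin_lock_irqsave(&ghes_ioremap_lock_irq, flags);
                /* ... map the fixmap entry and copy ... */
                spin_unlock_irqrestore(&ghes_ioremap_lock_irq, flags);
        }
}

/* After: the notification helpers take their own locks up front. */
static DEFINE_SPINLOCK(ghes_notify_lock_irq);   /* process/IRQ notifications */

static void irq_notify_after_example(void)
{
        unsigned long flags;

        spin_lock_irqsave(&ghes_notify_lock_irq, flags);
        /* ghes_proc() -> ghes_copy_tofrom_phys(), no in_nmi() guess needed */
        spin_unlock_irqrestore(&ghes_notify_lock_irq, flags);
}

static int sea_notify_after_example(void)
{
        static DEFINE_RAW_SPINLOCK(ghes_notify_lock_sea);  /* SEA-only lock */
        int rv = 0;

        raw_spin_lock(&ghes_notify_lock_sea);
        /* rv = ghes_in_nmi_spool_from_list(&ghes_sea); */
        raw_spin_unlock(&ghes_notify_lock_sea);

        return rv;
}

Because NOTIFY_SEA and NOTIFY_NMI now use separate raw locks, an SEA that interrupts the NMI handler no longer contends on the lock the NMI path is already holding.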