From patchwork Thu Feb 15 17:56:23 2018
X-Patchwork-Submitter: James Morse
X-Patchwork-Id: 10222987
Message-ID: <5A85C9C7.9060701@arm.com>
Date: Thu, 15 Feb 2018 17:56:23 +0000
From: James Morse
To: Xie XiuQi
Subject: Re: [PATCH v5 1/3] arm64/ras: support sea error recovery
References: <1516969885-150532-1-git-send-email-xiexiuqi@huawei.com>
 <1516969885-150532-2-git-send-email-xiexiuqi@huawei.com>
 <5A70C536.7040208@arm.com>
 <5A7B4D87.9020207@arm.com>
 <7dacf375-4645-ba34-62d1-96d9f67dbcc2@huawei.com>
In-Reply-To: <7dacf375-4645-ba34-62d1-96d9f67dbcc2@huawei.com>
Cc: mark.rutland@arm.com, wangkefeng.wang@huawei.com, cj.chengjian@huawei.com,
 julien.thierry@arm.com, catalin.marinas@arm.com, stephen.boyd@linaro.org,
 will.deacon@arm.com, lijinyue@huawei.com, huawei.libin@huawei.com,
 guohanjun@huawei.com, wangxiongfeng2@huawei.com, takahiro.akashi@linaro.org,
 zjzhang@codeaurora.org, gengdongjiu@huawei.com, linux-acpi@vger.kernel.org,
 mingo@redhat.com, bp@suse.de, Dave.Martin@arm.com, tbaicar@codeaurora.org,
 zhengqiang10@huawei.com, linux-arm-kernel@lists.infradead.org,
 ard.biesheuvel@linaro.org, linux-kernel@vger.kernel.org, hanjun.guo@linaro.org,
 shiju.jose@huawei.com

Hi Xie XiuQi,

On 08/02/18 08:35, Xie XiuQi wrote:
> I am very glad that you are trying to solve this problem; it is very helpful.
> I agree with your proposal, and I'll test it on my box later.
>
> Indeed, we are in process context when we are in the SEA handler. I had
> thought we couldn't call schedule() in an exception handler before.

While testing this I've come to the conclusion that the
memory_failure_queue_kick() approach I suggested makes arm64 behave slightly
differently with APEI, and would need re-inventing if we support kernel-first
too. The same race exists with memory-failure notifications signalled by SDEI,
and to a lesser extent by IRQ. So by fixing this in arch code, we are actually
making our lives harder.

Instead, I have the patch below. It is smaller, and not arch specific. It also
saves the arch code from secretly knowing that APEI calls
memory_failure_queue().

I will post this as part of that series shortly...

Thanks,

James

---------------%<---------------
[PATCH] mm/memory-failure: increase queued recovery work's priority

arm64 can take an NMI-like error notification when user-space steps on some
corrupt memory. APEI's GHES code will call memory_failure_queue() to schedule
the recovery work. We then return to user-space, possibly taking the fault
again.

Currently the arch code unconditionally signals user-space from this path, so
we don't get stuck in this loop, but the affected process never benefits from
memory_failure()'s recovery work. To fix this we need to know the recovery
work will run before we get back to user-space.

Increase the priority of the recovery work by scheduling it on the
system_highpri_wq, then try to bump the current task off this CPU so that the
recovery work starts immediately.
Reported-by: Xie XiuQi
Signed-off-by: James Morse
CC: Xie XiuQi
CC: gengdongjiu
---
 mm/memory-failure.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 4b80ccee4535..14f44d841e8b 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -55,6 +55,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include "internal.h"
@@ -1319,6 +1320,7 @@ static DEFINE_PER_CPU(struct memory_failure_cpu, memory_failure_cpu);
  */
 void memory_failure_queue(unsigned long pfn, int flags)
 {
+	int cpu = smp_processor_id();
 	struct memory_failure_cpu *mf_cpu;
 	unsigned long proc_flags;
 	struct memory_failure_entry entry = {
@@ -1328,11 +1330,14 @@ void memory_failure_queue(unsigned long pfn, int flags)
 
 	mf_cpu = &get_cpu_var(memory_failure_cpu);
 	spin_lock_irqsave(&mf_cpu->lock, proc_flags);
-	if (kfifo_put(&mf_cpu->fifo, entry))
-		schedule_work_on(smp_processor_id(), &mf_cpu->work);
-	else
+	if (kfifo_put(&mf_cpu->fifo, entry)) {
+		queue_work_on(cpu, system_highpri_wq, &mf_cpu->work);
+		set_tsk_need_resched(current);
+		preempt_set_need_resched();
+	} else {
 		pr_err("Memory failure: buffer overflow when queuing memory failure at %#lx\n",
 		       pfn);
+	}
 	spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
 	put_cpu_var(memory_failure_cpu);
 }
---------------%<---------------
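
For reference, this is roughly how memory_failure_queue() reads with the patch
above applied. It is only a sketch against the v4.16-rc era mm/memory-failure.c:
the struct memory_failure_entry initializer is assumed from the surrounding
context rather than taken from the diff, and the helpers used come from
<linux/sched.h> and <linux/preempt.h>.

/*
 * Sketch of memory_failure_queue() with the patch applied (not a verbatim
 * copy of the kernel source; the entry initializer is assumed context).
 */
void memory_failure_queue(unsigned long pfn, int flags)
{
	int cpu = smp_processor_id();
	struct memory_failure_cpu *mf_cpu;
	unsigned long proc_flags;
	struct memory_failure_entry entry = {
		.pfn	= pfn,
		.flags	= flags,
	};

	mf_cpu = &get_cpu_var(memory_failure_cpu);
	spin_lock_irqsave(&mf_cpu->lock, proc_flags);
	if (kfifo_put(&mf_cpu->fifo, entry)) {
		/* Queue the recovery work on the high-priority workqueue,
		 * pinned to the CPU that took the error notification. */
		queue_work_on(cpu, system_highpri_wq, &mf_cpu->work);
		/* Ask for a reschedule so the worker gets a chance to run
		 * before this task returns to user-space and re-faults. */
		set_tsk_need_resched(current);
		preempt_set_need_resched();
	} else {
		pr_err("Memory failure: buffer overflow when queuing memory failure at %#lx\n",
		       pfn);
	}
	spin_unlock_irqrestore(&mf_cpu->lock, proc_flags);
	put_cpu_var(memory_failure_cpu);
}

Setting TIF_NEED_RESCHED here means the return-to-user path goes through the
scheduler, which gives the now high-priority kworker a chance to run
memory_failure() before the faulting task can touch the poisoned page again.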