From patchwork Thu Mar 30 10:31:17 2017
X-Patchwork-Submitter: Xie XiuQi
X-Patchwork-Id: 9653745
From: Xie XiuQi
Subject: [PATCH v3 8/8] arm64: exception: check shared writable page in SEI handler
Date: Thu, 30 Mar 2017 18:31:17 +0800
Message-ID: <1490869877-118713-18-git-send-email-xiexiuqi@huawei.com>
In-Reply-To: <1490869877-118713-1-git-send-email-xiexiuqi@huawei.com>
References: <1490869877-118713-1-git-send-email-xiexiuqi@huawei.com>
X-Mailing-List: linux-acpi@vger.kernel.org

From: Wang Xiongfeng

Since an SEI is asynchronous, the erroneous data may already have been
consumed by the time the exception is taken. We must therefore assume
that all memory the current process can write to is contaminated. If the
process has no shared writable pages, it is killed and the system
continues running normally. Otherwise, the system must be terminated,
because the error may already have propagated to processes running on
other cores, and from there, recursively, to further processes.
Signed-off-by: Wang Xiongfeng
Signed-off-by: Xie XiuQi
---
 arch/arm64/kernel/traps.c | 149 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 144 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 99be6d8..b222589 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -34,6 +34,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
@@ -662,7 +664,144 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
 	[ESR_ELx_AET_CE]	= "Corrected",
 };
 
+static void shared_writable_pte_entry(pte_t *pte, unsigned long addr,
+				      struct mm_walk *walk)
+{
+	int *is_shared_writable = walk->private;
+	struct vm_area_struct *vma = walk->vma;
+	struct page *page = NULL;
+	int mapcount = -1;
+
+	if (!pte_write(__pte(pgprot_val(vma->vm_page_prot))))
+		return;
+
+	if (pte_present(*pte)) {
+		page = vm_normal_page(vma, addr, *pte);
+	} else if (is_swap_pte(*pte)) {
+		swp_entry_t swpent = pte_to_swp_entry(*pte);
+
+		if (!non_swap_entry(swpent))
+			mapcount = swp_swapcount(swpent);
+		else if (is_migration_entry(swpent))
+			page = migration_entry_to_page(swpent);
+	}
+
+	if (mapcount == -1 && page)
+		mapcount = page_mapcount(page);
+	if (mapcount >= 2)
+		*is_shared_writable = 1;
+}
+
+static void shared_writable_pmd_entry(pmd_t *pmd, unsigned long addr,
+				      struct mm_walk *walk)
+{
+	struct page *page;
+	int mapcount;
+	int *is_shared_writable = walk->private;
+
+	if (!pmd_write(*pmd))
+		return;
+
+	page = pmd_page(*pmd);
+	if (page) {
+		mapcount = page_mapcount(page);
+		if (mapcount >= 2)
+			*is_shared_writable = 1;
+	}
+}
+
+static int shared_writable_pte_range(pmd_t *pmd, unsigned long addr,
+				     unsigned long end, struct mm_walk *walk)
+{
+	pte_t *pte;
+
+	if (pmd_trans_huge(*pmd)) {
+		shared_writable_pmd_entry(pmd, addr, walk);
+		return 0;
+	}
+
+	if (pmd_trans_unstable(pmd))
+		return 0;
+
+	pte = pte_offset_map(pmd, addr);
+	for (; addr != end; pte++, addr += PAGE_SIZE)
+		shared_writable_pte_entry(pte, addr, walk);
+	return 0;
+}
+
+#ifdef CONFIG_HUGETLB_PAGE
+static int shared_writable_hugetlb_range(pte_t *pte, unsigned long hmask,
+					 unsigned long addr, unsigned long end,
+					 struct mm_walk *walk)
+{
+	struct vm_area_struct *vma = walk->vma;
+	int *is_shared_writable = walk->private;
+	struct page *page = NULL;
+	int mapcount;
+
+	if (!pte_write(*pte))
+		return 0;
+
+	if (pte_present(*pte)) {
+		page = vm_normal_page(vma, addr, *pte);
+	} else if (is_swap_pte(*pte)) {
+		swp_entry_t swpent = pte_to_swp_entry(*pte);
+
+		if (is_migration_entry(swpent))
+			page = migration_entry_to_page(swpent);
+	}
+
+	if (page) {
+		mapcount = page_mapcount(page);
+
+		if (mapcount >= 2)
+			*is_shared_writable = 1;
+	}
+	return 0;
+}
+#endif
+
+/*
+ * Check whether the mm_struct contains a page that is shared with another
+ * process and writable (not COW) at the same time. Returns 1 if such a
+ * page exists, 0 if not, and -EPERM if mm is NULL.
+ */
+int mm_shared_writable(struct mm_struct *mm)
+{
+	struct vm_area_struct *vma;
+	int is_shared_writable = 0;
+	struct mm_walk shared_writable_walk = {
+		.pmd_entry = shared_writable_pte_range,
+#ifdef CONFIG_HUGETLB_PAGE
+		.hugetlb_entry = shared_writable_hugetlb_range,
+#endif
+		.mm = mm,
+		.private = &is_shared_writable,
+	};
+
+	if (!mm)
+		return -EPERM;
+
+	vma = mm->mmap;
+	while (vma) {
+		walk_page_vma(vma, &shared_writable_walk);
+		if (is_shared_writable)
+			return 1;
+		vma = vma->vm_next;
+	}
+	return 0;
+}
+
 DEFINE_PER_CPU(int, sei_in_process);
+
+/*
+ * Since an SEI is asynchronous, the erroneous data may already have been
+ * consumed. We must therefore assume that all memory the current process
+ * can write to is contaminated. If the process has no shared writable
+ * pages, it is killed and the system continues running normally.
+ * Otherwise, the system must be terminated, because the error may
+ * already have propagated to processes running on other cores, and from
+ * there, recursively, to further processes.
+ */
 asmlinkage void do_sei(struct pt_regs *regs, unsigned int esr, int el)
 {
 	int aet = ESR_ELx_AET(esr);
@@ -684,16 +823,16 @@ asmlinkage void do_sei(struct pt_regs *regs, unsigned int esr, int el)
 	if (el == 0 && IS_ENABLED(CONFIG_ARM64_ESB) &&
 	    cpus_have_cap(ARM64_HAS_RAS_EXTN)) {
 		siginfo_t info;
-		void __user *pc = (void __user *)instruction_pointer(regs);
 
 		if (aet >= ESR_ELx_AET_UEO)
 			return;
 
-		if (aet == ESR_ELx_AET_UEU) {
-			info.si_signo = SIGILL;
+		if (aet == ESR_ELx_AET_UEU &&
+		    !mm_shared_writable(current->mm)) {
+			info.si_signo = SIGKILL;
 			info.si_errno = 0;
-			info.si_code = ILL_ILLOPC;
-			info.si_addr = pc;
+			info.si_code = 0;
+			info.si_addr = 0;
 			current->thread.fault_address = 0;
 			current->thread.fault_code = 0;