From patchwork Thu Mar 30 10:31:08 2017
X-Patchwork-Submitter: Xie XiuQi
X-Patchwork-Id: 9653865
From: Xie XiuQi
Subject: [PATCH v3 8/8] arm64: exception: check shared writable page in SEI handler
Date: Thu, 30 Mar 2017 18:31:08 +0800
Message-ID: <1490869877-118713-9-git-send-email-xiexiuqi@huawei.com>
In-Reply-To: <1490869877-118713-1-git-send-email-xiexiuqi@huawei.com>
References: <1490869877-118713-1-git-send-email-xiexiuqi@huawei.com>
List-Id: linux-arm-kernel@lists.infradead.org
From: Wang Xiongfeng <wangxiongfeng2@huawei.com>

Since SEI is asynchronous, the error data may already have been consumed
by the time the exception is taken. We must therefore assume that all
memory the current process can write to is contaminated. If the process
has no shared writable pages, it is killed and the system continues
running normally. Otherwise, the system must be terminated, because the
error may have propagated to processes running on other cores, and from
there recursively to further processes.

Signed-off-by: Wang Xiongfeng <wangxiongfeng2@huawei.com>
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
---
 arch/arm64/kernel/traps.c | 149 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 144 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 99be6d8..b222589 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -34,6 +34,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
@@ -662,7 +664,144 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
 	[ESR_ELx_AET_CE]	= "Corrected",
 };
 
+static void shared_writable_pte_entry(pte_t *pte, unsigned long addr,
+				      struct mm_walk *walk)
+{
+	int *is_shared_writable = walk->private;
+	struct vm_area_struct *vma = walk->vma;
+	struct page *page = NULL;
+	int mapcount = -1;
+
+	if (!pte_write(__pte(pgprot_val(vma->vm_page_prot))))
+		return;
+
+	if (pte_present(*pte)) {
+		page = vm_normal_page(vma, addr, *pte);
+	} else if (is_swap_pte(*pte)) {
+		swp_entry_t swpent = pte_to_swp_entry(*pte);
+
+		if (!non_swap_entry(swpent))
+			mapcount = swp_swapcount(swpent);
+		else if (is_migration_entry(swpent))
+			page = migration_entry_to_page(swpent);
+	}
+
+	if (mapcount == -1 && page)
+		mapcount = page_mapcount(page);
+	if (mapcount >= 2)
+		*is_shared_writable = 1;
+}
+
+static void shared_writable_pmd_entry(pmd_t *pmd, unsigned long addr,
+				      struct mm_walk *walk)
+{
+	struct page *page;
+	int mapcount;
+	int *is_shared_writable = walk->private;
+
+	if (!pmd_write(*pmd))
+		return;
+
+	page = pmd_page(*pmd);
+	if (page) {
+		mapcount = page_mapcount(page);
+		if (mapcount >= 2)
+			*is_shared_writable = 1;
+	}
+}
+
+static int shared_writable_pte_range(pmd_t *pmd, unsigned long addr,
+				     unsigned long end, struct mm_walk *walk)
+{
+	pte_t *pte;
+
+	if (pmd_trans_huge(*pmd)) {
+		shared_writable_pmd_entry(pmd, addr, walk);
+		return 0;
+	}
+
+	if (pmd_trans_unstable(pmd))
+		return 0;
+
+	pte = pte_offset_map(pmd, addr);
+	for (; addr != end; pte++, addr += PAGE_SIZE)
+		shared_writable_pte_entry(pte, addr, walk);
+	return 0;
+}
+
+#ifdef CONFIG_HUGETLB_PAGE
+static int shared_writable_hugetlb_range(pte_t *pte, unsigned long hmask,
+					 unsigned long addr, unsigned long end,
+					 struct mm_walk *walk)
+{
+	struct vm_area_struct *vma = walk->vma;
+	int *is_shared_writable = walk->private;
+	struct page *page = NULL;
+	int mapcount;
+
+	if (!pte_write(*pte))
+		return 0;
+
+	if (pte_present(*pte)) {
+		page = vm_normal_page(vma, addr, *pte);
+	} else if (is_swap_pte(*pte)) {
+		swp_entry_t swpent = pte_to_swp_entry(*pte);
+
+		if (is_migration_entry(swpent))
+			page = migration_entry_to_page(swpent);
+	}
+
+	if (page) {
+		mapcount = page_mapcount(page);
+
+		if (mapcount >= 2)
+			*is_shared_writable = 1;
+	}
+	return 0;
+}
+#endif
+
+/*
+ * Check whether there exists a page in mm_struct which is shared with
+ * another process and writable (not COW) at the same time. Returns 1 if
+ * such a page exists, 0 otherwise.
+ */
+int mm_shared_writable(struct mm_struct *mm)
+{
+	struct vm_area_struct *vma;
+	int is_shared_writable = 0;
+	struct mm_walk shared_writable_walk = {
+		.pmd_entry = shared_writable_pte_range,
+#ifdef CONFIG_HUGETLB_PAGE
+		.hugetlb_entry = shared_writable_hugetlb_range,
+#endif
+		.mm = mm,
+		.private = &is_shared_writable,
+	};
+
+	if (!mm)
+		return -EPERM;
+
+	vma = mm->mmap;
+	while (vma) {
+		walk_page_vma(vma, &shared_writable_walk);
+		if (is_shared_writable)
+			return 1;
+		vma = vma->vm_next;
+	}
+	return 0;
+}
+
 DEFINE_PER_CPU(int, sei_in_process);
+
+/*
+ * Since SEI is asynchronous, the error data may already have been
+ * consumed. We must therefore assume that all memory the current process
+ * can write to is contaminated. If the process has no shared writable
+ * pages, the process is killed and the system continues running normally.
+ * Otherwise, the system must be terminated, because the error may have
+ * propagated to processes running on other cores, and from there
+ * recursively to further processes.
+ */
 asmlinkage void do_sei(struct pt_regs *regs, unsigned int esr, int el)
 {
 	int aet = ESR_ELx_AET(esr);
@@ -684,16 +823,16 @@ asmlinkage void do_sei(struct pt_regs *regs, unsigned int esr, int el)
 	if (el == 0 && IS_ENABLED(CONFIG_ARM64_ESB) &&
 	    cpus_have_cap(ARM64_HAS_RAS_EXTN)) {
 		siginfo_t info;
-		void __user *pc = (void __user *)instruction_pointer(regs);
 
 		if (aet >= ESR_ELx_AET_UEO)
 			return;
 
-		if (aet == ESR_ELx_AET_UEU) {
-			info.si_signo = SIGILL;
+		if (aet == ESR_ELx_AET_UEU &&
+		    !mm_shared_writable(current->mm)) {
+			info.si_signo = SIGKILL;
 			info.si_errno = 0;
-			info.si_code = ILL_ILLOPC;
-			info.si_addr = pc;
+			info.si_code = 0;
+			info.si_addr = 0;
 			current->thread.fault_address = 0;
 			current->thread.fault_code = 0;