From patchwork Thu Mar 30 10:31:17 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Xie XiuQi
X-Patchwork-Id: 9653863
From: Xie XiuQi
Subject: [PATCH v3 8/8] arm64: exception: check shared writable page in SEI
 handler
Date: Thu, 30 Mar 2017 18:31:17 +0800
Message-ID: <1490869877-118713-18-git-send-email-xiexiuqi@huawei.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1490869877-118713-1-git-send-email-xiexiuqi@huawei.com>
References: <1490869877-118713-1-git-send-email-xiexiuqi@huawei.com>
X-BeenThere: linux-arm-kernel@lists.infradead.org
Cc: 
wuquanming@huawei.com, kvm@vger.kernel.org, xiexiuqi@huawei.com,
 linux-kernel@vger.kernel.org, gengdongjiu@huawei.com,
 wangxiongfeng2@huawei.com, linux-acpi@vger.kernel.org, Wang Xiongfeng,
 zhengqiang10@huawei.com, kvmarm@lists.cs.columbia.edu,
 linux-arm-kernel@lists.infradead.org
Sender: "linux-arm-kernel"

From: Wang Xiongfeng

Since SEI is asynchronous, the erroneous data may already have been
consumed by the time the error is reported. We must therefore assume
that all memory the current process can write to is contaminated. If
the process has no shared writable pages, killing it is enough and the
system can continue running normally. Otherwise the system must be
terminated, because the error may already have propagated to processes
running on other cores, and from there, recursively, to further
processes.

Signed-off-by: Wang Xiongfeng
Signed-off-by: Xie XiuQi
---
 arch/arm64/kernel/traps.c | 149 ++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 144 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 99be6d8..b222589 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -34,6 +34,8 @@
 #include
 #include
 #include
+#include
+#include
 #include
 #include
@@ -662,7 +664,144 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
 	[ESR_ELx_AET_CE]	= "Corrected",
 };
 
+static void shared_writable_pte_entry(pte_t *pte, unsigned long addr,
+				      struct mm_walk *walk)
+{
+	int *is_shared_writable = walk->private;
+	struct vm_area_struct *vma = walk->vma;
+	struct page *page = NULL;
+	int mapcount = -1;
+
+	if (!pte_write(__pte(pgprot_val(vma->vm_page_prot))))
+		return;
+
+	if (pte_present(*pte)) {
+		page = vm_normal_page(vma, addr, *pte);
+	} else if (is_swap_pte(*pte)) {
+		swp_entry_t swpent = pte_to_swp_entry(*pte);
+
+		if (!non_swap_entry(swpent))
+			mapcount = swp_swapcount(swpent);
+		else if (is_migration_entry(swpent))
+			page = migration_entry_to_page(swpent);
+	}
+
+	if (mapcount == -1 && page)
+		mapcount = page_mapcount(page);
+	if (mapcount >= 2)
+		*is_shared_writable = 1;
+}
+
+static void shared_writable_pmd_entry(pmd_t *pmd, unsigned long addr,
+				      struct mm_walk *walk)
+{
+	struct page *page;
+	int mapcount;
+	int *is_shared_writable = walk->private;
+
+	if (!pmd_write(*pmd))
+		return;
+
+	page = pmd_page(*pmd);
+	if (page) {
+		mapcount = page_mapcount(page);
+		if (mapcount >= 2)
+			*is_shared_writable = 1;
+	}
+}
+
+static int shared_writable_pte_range(pmd_t *pmd, unsigned long addr,
+				     unsigned long end, struct mm_walk *walk)
+{
+	pte_t *pte;
+
+	if (pmd_trans_huge(*pmd)) {
+		shared_writable_pmd_entry(pmd, addr, walk);
+		return 0;
+	}
+
+	if (pmd_trans_unstable(pmd))
+		return 0;
+
+	pte = pte_offset_map(pmd, addr);
+	for (; addr != end; pte++, addr += PAGE_SIZE)
+		shared_writable_pte_entry(pte, addr, walk);
+	return 0;
+}
+
+#ifdef CONFIG_HUGETLB_PAGE
+static int shared_writable_hugetlb_range(pte_t *pte, unsigned long hmask,
+					 unsigned long addr, unsigned long end,
+					 struct mm_walk *walk)
+{
+	struct vm_area_struct *vma = walk->vma;
+	int *is_shared_writable = walk->private;
+	struct page *page = NULL;
+	int mapcount;
+
+	if (!pte_write(*pte))
+		return 0;
+
+	if (pte_present(*pte)) {
+		page = vm_normal_page(vma, addr, *pte);
+	} else if (is_swap_pte(*pte)) {
+		swp_entry_t swpent = pte_to_swp_entry(*pte);
+
+		if (is_migration_entry(swpent))
+			page = migration_entry_to_page(swpent);
+	}
+
+	if (page) {
+		mapcount = page_mapcount(page);
+
+		if (mapcount >= 2)
+			*is_shared_writable = 1;
+	}
+	return 0;
+}
+#endif
+
+/*
+ * Check whether the mm_struct maps a page that is both writable (not COW)
+ * and shared with another process. Returns 1 if such a page exists,
+ * 0 if not, and -EPERM if mm is NULL.
+ */
+int mm_shared_writable(struct mm_struct *mm)
+{
+	struct vm_area_struct *vma;
+	int is_shared_writable = 0;
+	struct mm_walk shared_writable_walk = {
+		.pmd_entry = shared_writable_pte_range,
+#ifdef CONFIG_HUGETLB_PAGE
+		.hugetlb_entry = shared_writable_hugetlb_range,
+#endif
+		.mm = mm,
+		.private = &is_shared_writable,
+	};
+
+	if (!mm)
+		return -EPERM;
+
+	vma = mm->mmap;
+	while (vma) {
+		walk_page_vma(vma, &shared_writable_walk);
+		if (is_shared_writable)
+			return 1;
+		vma = vma->vm_next;
+	}
+	return 0;
+}
+
 DEFINE_PER_CPU(int, sei_in_process);
+
+/*
+ * Since SEI is asynchronous, the erroneous data may already have been
+ * consumed, so we must assume that all memory the current process can
+ * write to is contaminated. If the process has no shared writable
+ * pages, killing it is enough and the system can continue running
+ * normally. Otherwise the system must be terminated, because the error
+ * may already have propagated to processes running on other cores, and
+ * from there, recursively, to further processes.
+ */
 asmlinkage void do_sei(struct pt_regs *regs, unsigned int esr, int el)
 {
 	int aet = ESR_ELx_AET(esr);
@@ -684,16 +823,16 @@ asmlinkage void do_sei(struct pt_regs *regs, unsigned int esr, int el)
 	if (el == 0 && IS_ENABLED(CONFIG_ARM64_ESB) &&
 	    cpus_have_cap(ARM64_HAS_RAS_EXTN)) {
 		siginfo_t info;
-		void __user *pc = (void __user *)instruction_pointer(regs);
 
 		if (aet >= ESR_ELx_AET_UEO)
 			return;
 
-		if (aet == ESR_ELx_AET_UEU) {
-			info.si_signo = SIGILL;
+		if (aet == ESR_ELx_AET_UEU &&
+		    !mm_shared_writable(current->mm)) {
+			info.si_signo = SIGKILL;
 			info.si_errno = 0;
-			info.si_code = ILL_ILLOPC;
-			info.si_addr = pc;
+			info.si_code = 0;
+			info.si_addr = 0;
 
 			current->thread.fault_address = 0;
 			current->thread.fault_code = 0;
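
The containment policy the patch implements can be summarized outside the kernel as a small decision table. The sketch below is a hypothetical userspace model for illustration only: the names (`sei_policy`, `mm_shared_writable_model`, `struct mapping`) are invented stand-ins, not the kernel's structures, and the `mapcount >= 2` test is a simplification of the per-PTE/PMD walk above.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum sei_action { SEI_IGNORE, SEI_KILL_PROCESS, SEI_PANIC };

/* Simplified stand-in for one mapped page: whether the mapping is
 * writable, and how many address spaces map it. */
struct mapping {
	bool writable;
	int mapcount;
};

/* Model of mm_shared_writable(): true if any mapping is writable and
 * shared (mapcount >= 2), i.e. corruption written by this process could
 * already have leaked into another address space. */
static bool mm_shared_writable_model(const struct mapping *maps, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (maps[i].writable && maps[i].mapcount >= 2)
			return true;
	return false;
}

/* Model of the do_sei() policy for errors taken from EL0: severities at
 * or above "restartable" are ignored; an unrecoverable (UEU) error kills
 * only the current process when its writable pages are private; anything
 * else must bring the whole system down. */
static enum sei_action sei_policy(bool severity_restartable_or_better,
				  bool is_ueu,
				  const struct mapping *maps, size_t n)
{
	if (severity_restartable_or_better)
		return SEI_IGNORE;
	if (is_ueu && !mm_shared_writable_model(maps, n))
		return SEI_KILL_PROCESS;	/* damage confined to one mm */
	return SEI_PANIC;			/* error may have propagated */
}
```

The key design point mirrored here is that the kill-vs-panic decision depends only on whether any writable page is shared, which is why the real patch can stop the VMA walk as soon as `is_shared_writable` becomes set.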