From patchwork Tue Dec 13 12:05:23 2022
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13071971
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: David Hildenbrand, Kefeng Wang
Subject: [PATCH -next resend v3] mm: hwpoison: support recovery from ksm_might_need_to_copy()
Date: Tue, 13 Dec 2022 20:05:23 +0800
Message-ID: <20221213120523.141588-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20221213030557.143432-1-wangkefeng.wang@huawei.com>
References: <20221213030557.143432-1-wangkefeng.wang@huawei.com>

When the kernel copies a page in ksm_might_need_to_copy() but runs into
an uncorrectable error, it will crash, since the poisoned page is
consumed by the kernel; this is similar to copy-on-write poison
recovery. When an error is detected during the page copy, return
VM_FAULT_HWPOISON in do_swap_page(), and install a hwpoison entry in
unuse_pte() during swapoff, which helps us avoid a system crash. Note
that memory failure on a KSM page will be skipped, but
memory_failure_queue() is still called to stay consistent with the
general memory-failure process.
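The fix relies on the standard kernel convention that a pointer-returning
function can encode a small negative errno in the pointer itself, so
callers of ksm_might_need_to_copy() now distinguish three outcomes: NULL
(allocation failure, handled as VM_FAULT_OOM), ERR_PTR(-EHWPOISON) (the
copy consumed poisoned memory, handled as VM_FAULT_HWPOISON), and a valid
page pointer. As a minimal illustration, the userspace sketch below models
that three-way dispatch; it is not kernel code, the ERR_PTR/PTR_ERR
helpers merely mirror include/linux/err.h, and the EHWPOISON value (133
on x86) is shown for illustration only:

/* Userspace sketch of the ERR_PTR convention used by this patch. */
#include <stdio.h>

#define EHWPOISON	133	/* x86 value, illustrative only */

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }

/* Stand-in for ksm_might_need_to_copy(): NULL on allocation failure,
 * ERR_PTR(-EHWPOISON) when the copy hits poisoned memory, otherwise
 * a valid pointer. */
static void *might_copy(int outcome)
{
	static char fake_page;

	if (outcome == 0)
		return NULL;
	if (outcome == 1)
		return ERR_PTR(-EHWPOISON);
	return &fake_page;
}

int main(void)
{
	for (int outcome = 0; outcome < 3; outcome++) {
		void *page = might_copy(outcome);

		if (!page)
			printf("NULL: would return VM_FAULT_OOM\n");
		else if (PTR_ERR(page) == -EHWPOISON)
			printf("poison: would return VM_FAULT_HWPOISON\n");
		else
			printf("ok: proceed with the copied page\n");
	}
	return 0;
}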
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
v3 resend:
- enhance unuse_pte() when ksm_might_need_to_copy() returns -EHWPOISON
- fix issue found by lkp

 mm/ksm.c      |  8 ++++++--
 mm/memory.c   |  3 +++
 mm/swapfile.c | 20 ++++++++++++++------
 3 files changed, 23 insertions(+), 8 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index dd02780c387f..83e2f74ae7da 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2629,8 +2629,12 @@ struct page *ksm_might_need_to_copy(struct page *page,
 		new_page = NULL;
 	}
 	if (new_page) {
-		copy_user_highpage(new_page, page, address, vma);
-
+		if (copy_mc_user_highpage(new_page, page, address, vma)) {
+			put_page(new_page);
+			new_page = ERR_PTR(-EHWPOISON);
+			memory_failure_queue(page_to_pfn(page), 0);
+			return new_page;
+		}
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
 		__SetPageLocked(new_page);
diff --git a/mm/memory.c b/mm/memory.c
index aad226daf41b..5b2c137dfb2a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		if (unlikely(!page)) {
 			ret = VM_FAULT_OOM;
 			goto out_page;
+		} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
+			ret = VM_FAULT_HWPOISON;
+			goto out_page;
 		}
 		folio = page_folio(page);
 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 908a529bca12..0efb1c2c2415 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1763,12 +1763,15 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	struct page *swapcache;
 	spinlock_t *ptl;
 	pte_t *pte, new_pte;
+	bool hwpoisoned = false;
 	int ret = 1;
 
 	swapcache = page;
 	page = ksm_might_need_to_copy(page, vma, addr);
 	if (unlikely(!page))
 		return -ENOMEM;
+	else if (unlikely(PTR_ERR(page) == -EHWPOISON))
+		hwpoisoned = true;
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) {
@@ -1776,15 +1779,19 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		goto out;
 	}
 
-	if (unlikely(!PageUptodate(page))) {
-		pte_t pteval;
+	if (hwpoisoned || !PageUptodate(page)) {
+		swp_entry_t swp_entry;
 
 		dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
-		pteval = swp_entry_to_pte(make_swapin_error_entry());
-		set_pte_at(vma->vm_mm, addr, pte, pteval);
-		swap_free(entry);
+		if (hwpoisoned) {
+			swp_entry = make_hwpoison_entry(swapcache);
+			page = swapcache;
+		} else {
+			swp_entry = make_swapin_error_entry();
+		}
+		new_pte = swp_entry_to_pte(swp_entry);
 		ret = 0;
-		goto out;
+		goto setpte;
 	}
 
 	/* See do_swap_page() */
@@ -1816,6 +1823,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		new_pte = pte_mksoft_dirty(new_pte);
 	if (pte_swp_uffd_wp(*pte))
 		new_pte = pte_mkuffd_wp(new_pte);
+setpte:
 	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
 out:
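One detail worth noting in the unuse_pte() hunk: in the poisoned case the
failed copy was already released with put_page() inside
ksm_might_need_to_copy(), so page is reset to the original swapcache page
and the hwpoison entry is built from that page, meaning a later fault on
this address can be reported as hwpoison rather than silently reusing
poisoned data. As a purely illustrative userspace model (hypothetical
names, not kernel API), the branch logic reduces to:

#include <stdbool.h>
#include <stdio.h>

enum pte_kind { PTE_PRESENT, PTE_HWPOISON, PTE_SWAPIN_ERROR };

/* Toy model of the decision introduced above: a poisoned copy gets a
 * hwpoison entry, a non-uptodate page gets a swapin error entry, and
 * the normal case proceeds to build a present PTE. */
static enum pte_kind resolve_pte(bool hwpoisoned, bool uptodate)
{
	if (hwpoisoned)
		return PTE_HWPOISON;		/* make_hwpoison_entry(swapcache) */
	if (!uptodate)
		return PTE_SWAPIN_ERROR;	/* make_swapin_error_entry() */
	return PTE_PRESENT;			/* normal present-PTE path */
}

int main(void)
{
	printf("%d %d %d\n",
	       resolve_pte(true, true),	  /* poisoned copy */
	       resolve_pte(false, false), /* swapin read error */
	       resolve_pte(false, true)); /* normal swapoff */
	return 0;
}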