From patchwork Wed Feb 1 07:44:33 2023
From: Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v4] mm: hwpoison: support recovery from ksm_might_need_to_copy()
Date: Wed, 1 Feb 2023 15:44:33 +0800
Message-ID: <20230201074433.96641-1-wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13123866
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
When the kernel copies a page in ksm_might_need_to_copy() but runs into
an uncorrectable error, it will crash, since the poisoned page is
consumed by the kernel; this is similar to the issue recently fixed by
copy-on-write poison recovery.

When an error is detected during the page copy, return
VM_FAULT_HWPOISON in do_swap_page(), and install a hwpoison entry in
unuse_pte() when swapping off, which helps us avoid a system crash.
Note that memory failure on a KSM page is still skipped, but
memory_failure_queue() is still called, to stay consistent with the
general memory-failure process; KSM page recovery could be supported
in the future.
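The fix hinges on the kernel's ERR_PTR convention:
ksm_might_need_to_copy() can now hand back NULL (allocation failure),
a valid page, or a small negative errno encoded as a pointer, and each
caller must tell the three apart. A minimal userspace sketch of that
convention follows; the ERR_PTR/PTR_ERR/IS_ERR helpers mirror
include/linux/err.h, while might_need_to_copy() and its "simulate"
switch are hypothetical stand-ins, and EHWPOISON is assumed to be
defined by the platform's errno.h.

/*
 * Illustrative only: the three-way return convention used by the
 * patched ksm_might_need_to_copy() and checked by its callers.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical stand-in for ksm_might_need_to_copy(). */
static void *might_need_to_copy(int simulate)
{
	if (simulate == 1)
		return NULL;			/* allocation failed: -ENOMEM */
	if (simulate == 2)
		return ERR_PTR(-EHWPOISON);	/* copy consumed poison */
	return malloc(16);			/* success: a usable page */
}

int main(void)
{
	for (int s = 0; s <= 2; s++) {
		void *page = might_need_to_copy(s);

		if (!page)
			printf("case %d: out of memory\n", s);
		else if (IS_ERR(page) && PTR_ERR(page) == -EHWPOISON)
			printf("case %d: hwpoison, back out\n", s);
		else {
			printf("case %d: got page %p\n", s, page);
			free(page);
		}
	}
	return 0;
}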
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: Naoya Horiguchi
---
v4:
- update changelog and directly return ERR_PTR(-EHWPOISON) in
  ksm_might_need_to_copy(), suggested by HORIGUCHI NAOYA
- add back unlikely in unuse_pte()

 mm/ksm.c      |  7 +++++--
 mm/memory.c   |  3 +++
 mm/swapfile.c | 20 ++++++++++++++------
 3 files changed, 22 insertions(+), 8 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index dd02780c387f..addf490da146 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2629,8 +2629,11 @@ struct page *ksm_might_need_to_copy(struct page *page,
 		new_page = NULL;
 	}
 	if (new_page) {
-		copy_user_highpage(new_page, page, address, vma);
-
+		if (copy_mc_user_highpage(new_page, page, address, vma)) {
+			put_page(new_page);
+			memory_failure_queue(page_to_pfn(page), 0);
+			return ERR_PTR(-EHWPOISON);
+		}
 		SetPageDirty(new_page);
 		__SetPageUptodate(new_page);
 		__SetPageLocked(new_page);
diff --git a/mm/memory.c b/mm/memory.c
index aad226daf41b..5b2c137dfb2a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3840,6 +3840,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			if (unlikely(!page)) {
 				ret = VM_FAULT_OOM;
 				goto out_page;
+			} else if (unlikely(PTR_ERR(page) == -EHWPOISON)) {
+				ret = VM_FAULT_HWPOISON;
+				goto out_page;
 			}

 			folio = page_folio(page);
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 908a529bca12..3ef2468d7130 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1763,12 +1763,15 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 	struct page *swapcache;
 	spinlock_t *ptl;
 	pte_t *pte, new_pte;
+	bool hwpoisoned = false;
 	int ret = 1;

 	swapcache = page;
 	page = ksm_might_need_to_copy(page, vma, addr);
 	if (unlikely(!page))
 		return -ENOMEM;
+	else if (unlikely(PTR_ERR(page) == -EHWPOISON))
+		hwpoisoned = true;

 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
 	if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) {
@@ -1776,15 +1779,19 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		goto out;
 	}

-	if (unlikely(!PageUptodate(page))) {
-		pte_t pteval;
+	if (unlikely(hwpoisoned || !PageUptodate(page))) {
+		swp_entry_t swp_entry;

 		dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
-		pteval = swp_entry_to_pte(make_swapin_error_entry());
-		set_pte_at(vma->vm_mm, addr, pte, pteval);
-		swap_free(entry);
+		if (hwpoisoned) {
+			swp_entry = make_hwpoison_entry(swapcache);
+			page = swapcache;
+		} else {
+			swp_entry = make_swapin_error_entry();
+		}
+		new_pte = swp_entry_to_pte(swp_entry);
 		ret = 0;
-		goto out;
+		goto setpte;
 	}

 	/* See do_swap_page() */
@@ -1816,6 +1823,7 @@ static int unuse_pte(struct vm_area_struct *vma, pmd_t *pmd,
 		new_pte = pte_mksoft_dirty(new_pte);
 	if (pte_swp_uffd_wp(*pte))
 		new_pte = pte_mkuffd_wp(new_pte);
+setpte:
 	set_pte_at(vma->vm_mm, addr, pte, new_pte);
 	swap_free(entry);
 out:
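For the mm/ksm.c hunk above, the essential shape is a back-out on a
fallible copy: copy_mc_user_highpage() returns non-zero if the source
page trips a machine check mid-copy, so the new page is released and
the poison is reported rather than consumed. Below is a rough
userspace sketch of that pattern, under the assumption that
fallible_copy() and its "poisoned" flag merely simulate a
machine-check abort; real hardware poison cannot be probed this way.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Returns 0 on success, non-zero if the source is "poisoned". */
static int fallible_copy(void *dst, const void *src, int poisoned)
{
	if (poisoned)
		return -1;	/* a real MC-safe copy aborts mid-transfer */
	memcpy(dst, src, PAGE_SIZE);
	return 0;
}

/* Mirrors the shape of the patched ksm_might_need_to_copy(). */
static void *copy_or_bail(const void *src, int poisoned)
{
	void *new_page = malloc(PAGE_SIZE);

	if (!new_page)
		return NULL;
	if (fallible_copy(new_page, src, poisoned)) {
		free(new_page);		/* put_page(new_page) in the patch */
		return NULL;		/* ERR_PTR(-EHWPOISON) in the patch */
	}
	return new_page;
}

int main(void)
{
	char src[PAGE_SIZE] = "hello";
	void *ok = copy_or_bail(src, 0);
	void *bad = copy_or_bail(src, 1);

	printf("clean copy: %s, poisoned copy: %s\n",
	       ok ? "succeeded" : "failed", bad ? "succeeded" : "failed");
	free(ok);
	return 0;
}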