From patchwork Mon Jun 3 09:24:34 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13683598
From: Kefeng Wang
Cc: Tony Luck, Miaohe Lin, Matthew Wilcox, David Hildenbrand, Muchun Song,
    Benjamin LaHaise, Jiaqi Yan, Hugh Dickins,
    Vishal Moola, Alistair Popple, Jane Chu, Oscar Salvador, Lance Yang,
    Kefeng Wang
Subject: [PATCH v4 1/6] mm: move memory_failure_queue() into copy_mc_[user]_highpage()
Date: Mon, 3 Jun 2024 17:24:34 +0800
Message-ID: <20240603092439.3360652-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240603092439.3360652-1-wangkefeng.wang@huawei.com>
References: <20240603092439.3360652-1-wangkefeng.wang@huawei.com>

The callers of copy_mc_[user]_highpage() (e.g. the CoW and KSM page-copy
paths) each follow a failed copy with a memory_failure_queue() call to mark
the source page as hardware-poisoned and unmap it from other tasks, and the
upcoming poison recovery for folio migration will need to do the same
thing. Move the memory_failure_queue() call into copy_mc_[user]_highpage()
itself instead of duplicating it in every user; this also improves the
handling of poisoned pages in khugepaged.
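For illustration, the caller-side pattern this removes looks roughly as
follows. The example_copy_one_page_old/new names are hypothetical; only
copy_mc_user_highpage() and memory_failure_queue() are the real interfaces
touched by this patch, and the real callers are updated in the diff below.

#include <linux/highmem.h>
#include <linux/mm.h>

/* Before the change: each caller queues the poisoned source page itself. */
static int example_copy_one_page_old(struct page *dst, struct page *src,
                                     unsigned long addr,
                                     struct vm_area_struct *vma)
{
        if (copy_mc_user_highpage(dst, src, addr, vma)) {
                memory_failure_queue(page_to_pfn(src), 0);
                return -EHWPOISON;
        }
        return 0;
}

/* After the change: copy_mc_user_highpage() queues the failure internally,
 * so the caller only propagates the error. */
static int example_copy_one_page_new(struct page *dst, struct page *src,
                                     unsigned long addr,
                                     struct vm_area_struct *vma)
{
        if (copy_mc_user_highpage(dst, src, addr, vma))
                return -EHWPOISON;
        return 0;
}

This is why the mm/ksm.c and mm/memory.c hunks below only delete lines.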
Signed-off-by: Kefeng Wang
Reviewed-by: Jane Chu
Reviewed-by: Miaohe Lin
---
 include/linux/highmem.h |  6 ++++++
 mm/ksm.c                |  1 -
 mm/memory.c             | 12 +++---------
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 00341b56d291..6b0d6f3c8580 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -352,6 +352,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);
+
 	return ret;
 }
 
@@ -368,6 +371,9 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
+	if (ret)
+		memory_failure_queue(page_to_pfn(from), 0);
+
 	return ret;
 }
 #else
diff --git a/mm/ksm.c b/mm/ksm.c
index 452ac8346e6e..3d95e5a9f301 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -3091,7 +3091,6 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
 		if (copy_mc_user_highpage(folio_page(new_folio, 0), page,
 								addr, vma)) {
 			folio_put(new_folio);
-			memory_failure_queue(folio_pfn(folio), 0);
 			return ERR_PTR(-EHWPOISON);
 		}
 		folio_set_dirty(new_folio);
diff --git a/mm/memory.c b/mm/memory.c
index 63f9f98b47bd..e06de844eaba 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3034,10 +3034,8 @@ static inline int __wp_page_copy_user(struct page *dst, struct page *src,
 	unsigned long addr = vmf->address;
 
 	if (likely(src)) {
-		if (copy_mc_user_highpage(dst, src, addr, vma)) {
-			memory_failure_queue(page_to_pfn(src), 0);
+		if (copy_mc_user_highpage(dst, src, addr, vma))
 			return -EHWPOISON;
-		}
 		return 0;
 	}
 
@@ -6417,10 +6415,8 @@ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
 
 		cond_resched();
 		if (copy_mc_user_highpage(dst_page, src_page,
-					  addr + i*PAGE_SIZE, vma)) {
-			memory_failure_queue(page_to_pfn(src_page), 0);
+					  addr + i*PAGE_SIZE, vma))
 			return -EHWPOISON;
-		}
 	}
 	return 0;
 }
@@ -6437,10 +6433,8 @@ static int copy_subpage(unsigned long addr, int idx, void *arg)
 	struct page *dst = nth_page(copy_arg->dst, idx);
 	struct page *src = nth_page(copy_arg->src, idx);
 
-	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma)) {
-		memory_failure_queue(page_to_pfn(src), 0);
+	if (copy_mc_user_highpage(dst, src, addr, copy_arg->vma))
 		return -EHWPOISON;
-	}
 	return 0;
 }
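For reference, a rough sketch (not part of this patch; the helper name and
its exact shape are assumptions for illustration) of how the anticipated
migration-path user can lean on the queueing now done inside
copy_mc_highpage():

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/sched.h>

/*
 * Hypothetical machine-check-safe folio copy for the migration path.
 * On a copy failure, copy_mc_highpage() has already queued the poisoned
 * source pfn via memory_failure_queue(), so this helper only has to
 * stop and report -EHWPOISON to its caller.
 */
static int example_folio_mc_copy(struct folio *dst, struct folio *src)
{
        long nr_pages = folio_nr_pages(src);
        long i;

        for (i = 0; i < nr_pages; i++) {
                if (copy_mc_highpage(folio_page(dst, i), folio_page(src, i)))
                        return -EHWPOISON;
                cond_resched();
        }
        return 0;
}

Keeping the error reporting in the copy helpers means this future caller,
like the existing ones, stays free of duplicated recovery logic.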