From patchwork Mon Oct 10 16:01:40 2022
X-Patchwork-Submitter: Jiaqi Yan
X-Patchwork-Id: 13002706
Date: Mon, 10 Oct 2022 09:01:40 -0700
Message-ID: <20221010160142.1087120-1-jiaqiyan@google.com>
Subject: [PATCH v5 0/2] Memory poison recovery in khugepaged collapsing
From: Jiaqi Yan
To: shy828301@gmail.com, tongtiangen@huawei.com
Cc: tony.luck@intel.com, naoya.horiguchi@nec.com,
    kirill.shutemov@linux.intel.com, linmiaohe@huawei.com,
    linux-mm@kvack.org, akpm@linux-foundation.org, jiaqiyan@google.com

Problem
=======

Memory DIMMs are subject to multi-bit flips, i.e. memory errors. As memory
size and density increase, both the likelihood and the number of memory
errors increase. In data centers and in the cloud, the growing size and
density of server RAM has been accompanied by a rise in uncorrectable
memory errors.

The kernel already has mechanisms to recover from uncorrectable memory
errors. This patch series adds such a recovery mechanism for one
particular kernel agent, khugepaged, when it collapses memory pages.

Impact
======

The main reason we chose to make khugepaged collapsing tolerant of memory
failures is that khugepaged is very likely to access poisoned memory while
performing compaction actions that are functionally optional.

Standard applications typically do not have strict requirements on the
size of their pages, so the kernel gives them 4K pages. The kernel can
improve application performance either by 1) giving applications 2M pages
to begin with, or 2) collapsing 4K pages into 2M pages when possible. The
collapsing is done by khugepaged, a kernel agent that constantly scans
memory. When collapsing 4K pages into a 2M page, khugepaged must copy the
data from the 4K pages into a physically contiguous 2M page.

Therefore, as long as there exists one poisoned cache line in a
collapsible group of 4K pages, khugepaged will eventually access it. Today
the result is a kernel panic triggered by the machine check exception.
Yet khugepaged's compaction operations are not functionally required
kernel actions, so making khugepaged tolerant of poisoned memory will
greatly improve the user experience.

This patch series targets the cases where khugepaged is the first one to
detect the memory errors on the poisoned pages; in other words, the pages
are not already known to have memory errors when khugepaged collapsing
gets to them. In our observation, this happens frequently when the huge
page ratio of the system is relatively low, which is fairly common for
virtual machines running in the cloud.
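(Note for readers of this cover letter: the series introduces a
machine-check-safe copy helper, copy_highpage_mc, mentioned in the
changelog below. The snippet here is only a rough sketch of what such a
helper could look like, built on the existing copy_mc_to_kernel()
primitive; its body and return convention are illustrative assumptions,
not the code from the patch.)

#include <linux/errno.h>
#include <linux/highmem.h>
#include <linux/uaccess.h>

/*
 * Illustrative sketch only: copy one 4K page, but survive a machine
 * check on the source. copy_mc_to_kernel() returns the number of bytes
 * it could not copy, so a non-zero return means the source page is
 * poisoned and the collapse should be aborted rather than panicking.
 */
static inline int copy_highpage_mc(struct page *to, struct page *from)
{
        char *vfrom = kmap_local_page(from);
        char *vto = kmap_local_page(to);
        unsigned long not_copied;

        not_copied = copy_mc_to_kernel(vto, vfrom, PAGE_SIZE);

        kunmap_local(vto);
        kunmap_local(vfrom);

        return not_copied ? -EHWPOISON : 0;
}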
Solution
========

As stated before, it is undesirable to crash the system only because
khugepaged accesses poisoned pages while it is collapsing 4K pages. The
high-level idea of this patch series is to skip the group of pages
(usually 512 4K pages) once khugepaged finds one of them is poisoned, as
these pages have become ineligible for collapse.

We are also careful to unwind the operations khugepaged has performed
before it detects the memory failure. For example, before copying and
collapsing a group of anonymous pages into a huge page, the source pages
are isolated and their page table is unlinked from the PMD. These
operations need to be undone so that, from the perspective of other
threads (both user and kernel space), the pages are neither changed nor
lost. A simplified sketch of this recovery flow is included after the
diffstat at the end of this letter.

As for file-backed memory pages, a rollback case already exists. This
patch series just extends it so that khugepaged also rolls back correctly
when it fails to copy poisoned 4K pages.

Changelog
=========

v5 changes
- Rebase patches to mm-unstable at commit ffb39098bf87 ("Merge tag
  'linux-kselftest-kunit-6.1-rc1' of
  git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest")
- Resolve conflicts with:
  commit 2f55f070e5b8 ("mm/khugepaged: minor cleanup for collapse_file")
  commit 1baec203b77c ("mm/khugepaged: try to free transhuge swapcache
  when possible")

v4 changes
- Incorporate feedback from Yang Shi
- Remove the tracepoint for __collapse_huge_page_copy; just keep
  SCAN_COPY_MC and let trace_mm_collapse_huge_page report it
- Remove unnecessary comments

v3 changes
- Incorporate feedback from Yang Shi
- Add tracepoint for __collapse_huge_page_copy
- Restore PMD in collapse_huge_page
- Correct comment about mmap_read_lock

v2 changes
- Incorporate feedback from Yang Shi
- Only keep copy_highpage_mc
- Add new scan_result SCAN_COPY_MC
- Defer NR_FILE_THPS update until copying succeeds

Jiaqi Yan (2):
  mm/khugepaged: recover from poisoned anonymous memory
  mm/khugepaged: recover from poisoned file-backed memory

 include/linux/highmem.h            |  19 +++
 include/trace/events/huge_memory.h |   3 +-
 mm/khugepaged.c                    | 204 ++++++++++++++++++++---------
 3 files changed, 166 insertions(+), 60 deletions(-)
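Finally, to make the recovery flow described under Solution concrete, here
is a heavily simplified sketch of the anonymous-memory case. It is not the
actual __collapse_huge_page_copy() change from the patches: the function
name, the parameters, and the src[] array are hypothetical simplifications,
and the rollback is reduced to its two essential steps (re-populating the
PMD with the original page table and putting back the isolated source
pages).

/*
 * Hypothetical sketch, not the patch code: copy every 4K source page
 * into the new 2M page with the MC-safe helper; on poison, undo what
 * collapse has already done so that other threads still see intact
 * pages, and report the new SCAN_COPY_MC result instead of crashing.
 */
static int collapse_copy_sketch(struct page *hpage, struct page **src,
                                int nr_pages, struct mm_struct *mm,
                                pmd_t *pmd, pmd_t orig_pmd)
{
        int i;

        for (i = 0; i < nr_pages; i++) {
                if (!copy_highpage_mc(hpage + i, src[i]))
                        continue;

                /* Relink the original page table under the PMD ... */
                pmd_populate(mm, pmd, pmd_pgtable(orig_pmd));
                /* ... and put back the isolated source pages here. */

                return SCAN_COPY_MC;    /* skip this 2M region, no panic */
        }

        return SCAN_SUCCEED;
}

The file-backed path keeps its existing rollback; the patches only teach
it to take that path when the copy itself hits poison.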