From patchwork Wed Jan 8 07:48:21 2025
X-Patchwork-Submitter: Yu Zhao
X-Patchwork-Id: 13930266
Date: Wed, 8 Jan 2025 00:48:21 -0700
X-Mailer: git-send-email 2.47.1.613.gc27f4b7a9f-goog
Message-ID: <20250108074822.722696-1-yuzhao@google.com>
Subject: [PATCH mm-unstable v2] mm/hugetlb_vmemmap: fix memory loads ordering
From: Yu Zhao
To: Andrew Morton
Cc: David Hildenbrand, Mateusz Guzik, "Matthew Wilcox (Oracle)", Muchun Song,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yu Zhao, Will Deacon
Using x86_64 as an example, for a 32KB struct page[] area describing a
2MB hugeTLB, HVO reduces the area to 4KB by the following steps:

1. Split the (r/w vmemmap) PMD mapping the area into 512 (r/w) PTEs;
2. For the 8 PTEs mapping the area, remap PTE 1-7 to the page mapped by
   PTE 0, and at the same time change the permission from r/w to r/o;
3. Free the pages PTE 1-7 used to map, hence the reduction from 32KB to
   4KB.

However, the following race can happen due to improperly ordered memory
loads:

CPU 1 (HVO)                     CPU 2 (speculative PFN walker)

page_ref_freeze()
synchronize_rcu()
                                rcu_read_lock()
                                page_is_fake_head() is false
vmemmap_remap_pte()
XXX: struct page[] becomes r/o

page_ref_unfreeze()
                                page_ref_count() is not zero
                                atomic_add_unless(&page->_refcount)
                                XXX: try to modify r/o struct page[]

Specifically, page_is_fake_head() must be ordered after page_ref_count()
on CPU 2 so that it can only return true for this case, to avoid the
later attempt to modify r/o struct page[].

This patch adds the missing memory barrier and performs the tests on
page_is_fake_head() and page_ref_count() in the proper order.

Fixes: bd225530a4c7 ("mm/hugetlb_vmemmap: fix race with speculative PFN walkers")
Reported-by: Will Deacon
Closes: https://lore.kernel.org/20241128142028.GA3506@willie-the-truck/
Signed-off-by: Yu Zhao
Reviewed-by: David Hildenbrand
Reviewed-by: Muchun Song
---
 include/linux/page-flags.h | 37 +++++++++++++++++++++++++++++++++++++
 include/linux/page_ref.h   |  2 +-
 2 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 691506bdf2c5..16fa8f0cea02 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -225,11 +225,48 @@ static __always_inline const struct page *page_fixed_fake_head(const struct page
 	}
 	return page;
 }
+
+static __always_inline bool page_count_writable(const struct page *page, int u)
+{
+	if (!static_branch_unlikely(&hugetlb_optimize_vmemmap_key))
+		return true;
+
+	/*
+	 * The refcount check is ordered before the fake-head check to prevent
+	 * the following race:
+	 * CPU 1 (HVO)                     CPU 2 (speculative PFN walker)
+	 *
+	 * page_ref_freeze()
+	 * synchronize_rcu()
+	 *                                 rcu_read_lock()
+	 *                                 page_is_fake_head() is false
+	 * vmemmap_remap_pte()
+	 * XXX: struct page[] becomes r/o
+	 *
+	 * page_ref_unfreeze()
+	 *                                 page_ref_count() is not zero
+	 *
+	 *                                 atomic_add_unless(&page->_refcount)
+	 *                                 XXX: try to modify r/o struct page[]
+	 *
+	 * The refcount check also prevents modification attempts to other (r/o)
+	 * tail pages that are not fake heads.
+	 */
+	if (atomic_read_acquire(&page->_refcount) == u)
+		return false;
+
+	return page_fixed_fake_head(page) == page;
+}
 #else
 static inline const struct page *page_fixed_fake_head(const struct page *page)
 {
 	return page;
 }
+
+static inline bool page_count_writable(const struct page *page, int u)
+{
+	return true;
+}
 #endif
 
 static __always_inline int page_is_fake_head(const struct page *page)

diff --git a/include/linux/page_ref.h b/include/linux/page_ref.h
index 8c236c651d1d..544150d1d5fd 100644
--- a/include/linux/page_ref.h
+++ b/include/linux/page_ref.h
@@ -234,7 +234,7 @@ static inline bool page_ref_add_unless(struct page *page, int nr, int u)
 
 	rcu_read_lock();
 	/* avoid writing to the vmemmap area being remapped */
-	if (!page_is_fake_head(page) && page_ref_count(page) != u)
+	if (page_count_writable(page, u))
 		ret = atomic_add_unless(&page->_refcount, nr, u);
 	rcu_read_unlock();
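
Not part of the patch, but for readers who want to poke at the ordering: below
is a minimal userspace C11 sketch of the release-acquire pairing the fix relies
on. It is not kernel code; refcount and fake_head are stand-ins for
page->_refcount and the r/o state that page_fixed_fake_head() observes after
vmemmap_remap_pte(), the hvo_thread/walker_thread names are invented for the
sketch, the page_ref_freeze()/synchronize_rcu() handshake is deliberately
elided, and it assumes page_ref_unfreeze() publishes the unfrozen count with a
release store, which is what the new atomic_read_acquire() pairs with.

/*
 * Hypothetical userspace model, not kernel code: only the load ordering
 * added by the patch is modelled.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <threads.h>

static atomic_int refcount = 1;
static atomic_bool fake_head = false;

/* CPU 1 (HVO): freeze, remap to r/o, then unfreeze with a release store. */
static int hvo_thread(void *arg)
{
	atomic_store_explicit(&refcount, 0, memory_order_relaxed);	/* page_ref_freeze() */
	atomic_store_explicit(&fake_head, true, memory_order_relaxed);	/* vmemmap_remap_pte() */
	atomic_store_explicit(&refcount, 1, memory_order_release);	/* page_ref_unfreeze() */
	return 0;
}

/* CPU 2 (speculative PFN walker): the refcount is checked first, with acquire. */
static int walker_thread(void *arg)
{
	if (atomic_load_explicit(&refcount, memory_order_acquire) == 0)
		return 0;	/* frozen: not writable */
	/*
	 * Ordered after the acquire load above: a refcount published by the
	 * release store in hvo_thread() cannot be observed together with a
	 * stale fake_head == false.
	 */
	if (!atomic_load_explicit(&fake_head, memory_order_relaxed))
		atomic_fetch_add_explicit(&refcount, 1, memory_order_relaxed);	/* atomic_add_unless() */
	return 0;
}

int main(void)
{
	thrd_t a, b;

	thrd_create(&a, hvo_thread, NULL);
	thrd_create(&b, walker_thread, NULL);
	thrd_join(a, NULL);
	thrd_join(b, NULL);
	printf("refcount=%d fake_head=%d\n",
	       atomic_load(&refcount), (int)atomic_load(&fake_head));
	return 0;
}

With the acquire load in place, a walker that observes the unfrozen refcount is
also guaranteed to observe the remapped (fake-head) state and backs off;
checking the fake head before the refcount, or dropping the acquire, reopens
the window in which the walker writes to the now r/o struct page[].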