From patchwork Mon May  9 06:27:00 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Muchun Song <songmuchun@bytedance.com>
X-Patchwork-Id: 12843061
From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, akpm@linux-foundation.org, mcgrof@kernel.org,
    keescook@chromium.org, yzaikin@google.com, osalvador@suse.de, david@redhat.com, masahiroy@kernel.org
Cc: linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, duanxiongchun@bytedance.com, smuchun@gmail.com, Muchun Song <songmuchun@bytedance.com>
Subject: [PATCH v10 1/4] mm: hugetlb_vmemmap: disable hugetlb_optimize_vmemmap when struct page crosses page boundaries
Date: Mon, 9 May 2022 14:27:00 +0800
Message-Id: <20220509062703.64249-2-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.32.0 (Apple Git-132)
In-Reply-To: <20220509062703.64249-1-songmuchun@bytedance.com>
References: <20220509062703.64249-1-songmuchun@bytedance.com>
MIME-Version: 1.0

If the size of "struct page" is not a power of two, and the feature
that minimizes the overhead of the struct pages associated with each
HugeTLB page is enabled, then the vmemmap pages of HugeTLB will be
corrupted after remapping (in theory, a panic would follow). This can
only happen with !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not
a conventional configuration nowadays, so it is not a real-world issue,
just the result of a code review. However, nothing prevents someone
from building with that combination of options, so
hugetlb_optimize_vmemmap should be disabled in that case to fix the
issue.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
---
 mm/hugetlb_vmemmap.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 29554c6ef2ae..6254bb2d4ae5 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -28,12 +28,6 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
 	if (!buf)
 		return -EINVAL;
 
@@ -119,6 +113,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	if (!hugetlb_optimize_vmemmap_enabled())
 		return;
 
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+		return;
+	}
+
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page is not to be freed to buddy allocator, the other tail
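
A side note on why the power-of-two requirement exists (an
illustration, not part of the patch): PAGE_SIZE is a power of two, so
sizeof(struct page) divides a page evenly only when it is itself a
power of two; with any other size, some struct page inevitably
straddles a page boundary, so the tail vmemmap pages are no longer
byte-identical and cannot all be remapped to one shared page. Below is
a minimal userspace sketch of that argument, assuming 4 KiB pages and
using 56 bytes as a hypothetical stand-in for a non-power-of-two
struct page layout:

	#include <stdio.h>
	#include <stdbool.h>
	#include <stddef.h>

	#define PAGE_SIZE 4096UL

	/* The same test the kernel's is_power_of_2() performs. */
	static bool is_power_of_2(unsigned long n)
	{
		return n != 0 && (n & (n - 1)) == 0;
	}

	/*
	 * Return true if any of the 'nr' struct-page slots in a vmemmap
	 * array would straddle a page boundary for the given slot size.
	 */
	static bool crosses_page_boundary(size_t slot_size, unsigned long nr)
	{
		for (unsigned long i = 0; i < nr; i++) {
			unsigned long start = i * slot_size;
			unsigned long end = start + slot_size - 1;

			if (start / PAGE_SIZE != end / PAGE_SIZE)
				return true;
		}
		return false;
	}

	int main(void)
	{
		/* 64 bytes: the common x86_64 layout; divides 4096 evenly. */
		printf("64 bytes: pow2=%d crosses=%d\n",
		       is_power_of_2(64), crosses_page_boundary(64, 4096));
		/* 56 bytes: hypothetical !CONFIG_MEMCG && !CONFIG_SLUB layout. */
		printf("56 bytes: pow2=%d crosses=%d\n",
		       is_power_of_2(56), crosses_page_boundary(56, 4096));
		return 0;
	}

This prints crosses=0 for 64 bytes and crosses=1 for 56 bytes, which is
exactly the condition the patch now checks in hugetlb_vmemmap_init()
before leaving hugetlb_optimize_vmemmap_key enabled.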