From patchwork Wed Aug 30 18:27:50 2023
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13370660
From: Zi Yan
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    linux-mips@vger.kernel.org
Cc: Zi Yan ,
    Andrew Morton , Thomas Bogendoerfer ,
    "Matthew Wilcox (Oracle)" , David Hildenbrand ,
    Mike Kravetz , Muchun Song , "Mike Rapoport (IBM)"
Subject: [PATCH 0/3] Use nth_page() in place of direct struct page manipulation
Date: Wed, 30 Aug 2023 14:27:50 -0400
Message-Id: <20230830182753.55367-1-zi.yan@sent.com>
X-Mailer: git-send-email 2.40.1
Reply-To: Zi Yan
X-Mailing-List: linux-mips@vger.kernel.org

From: Zi Yan

On SPARSEMEM without VMEMMAP, struct page is not guaranteed to be
contiguous, since each memory section's memmap might be allocated
independently. hugetlb pages can span more than one memory section, so
direct struct page manipulation on hugetlb pages/subpages might yield the
wrong struct page. The kernel provides nth_page() to do the manipulation
properly; use it whenever the code can see hugetlb pages.

The patches are on top of next-20230830.

Zi Yan (3):
  mm: use nth_page() in place of direct struct page manipulation.
  fs: use nth_page() in place of direct struct page manipulation.
  mips: use nth_page() in place of direct struct page manipulation.

 arch/mips/mm/cache.c | 2 +-
 fs/hugetlbfs/inode.c | 4 ++--
 mm/cma.c             | 2 +-
 mm/hugetlb.c         | 2 +-
 mm/memory_hotplug.c  | 2 +-
 5 files changed, 6 insertions(+), 6 deletions(-)
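
For illustration, the kind of change the series makes looks roughly like
the minimal sketch below (touch_subpages() is a hypothetical helper, not
code from the patches; it only shows the "page + n" vs. nth_page()
difference):

    #include <linux/mm.h>		/* nth_page() */
    #include <linux/highmem.h>	/* clear_highpage() */

    /*
     * Hypothetical helper: walk every subpage of a hugetlb page that may
     * span more than one memory section.
     */
    static void touch_subpages(struct page *head, unsigned long nr_pages)
    {
    	unsigned long i;

    	for (i = 0; i < nr_pages; i++) {
    		/*
    		 * "head + i" is only safe while the memmap is virtually
    		 * contiguous (FLATMEM or SPARSEMEM_VMEMMAP). On SPARSEMEM
    		 * without VMEMMAP it can walk off the end of one section's
    		 * memmap, so go through the pfn instead:
    		 * nth_page(head, i) == pfn_to_page(page_to_pfn(head) + i).
    		 */
    		struct page *p = nth_page(head, i);

    		clear_highpage(p);
    	}
    }

The three patches apply exactly this substitution, replacing direct
struct page arithmetic with nth_page() in the files listed in the
diffstat above.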