From patchwork Wed Sep  6 15:03:06 2023
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13375710
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
From: Zi Yan
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer, "Matthew Wilcox (Oracle)",
    David Hildenbrand, Mike Kravetz, Muchun Song, "Mike Rapoport (IBM)",
    stable@vger.kernel.org, Muchun Song
Subject: [PATCH v2 2/5] mm/hugetlb: use nth_page() in place of direct struct page manipulation
Date: Wed, 6 Sep 2023 11:03:06 -0400
Message-Id: <20230906150309.114360-3-zi.yan@sent.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230906150309.114360-1-zi.yan@sent.com>
References: <20230906150309.114360-1-zi.yan@sent.com>
Reply-To: Zi Yan
Precedence: bulk
X-Mailing-List: linux-mips@vger.kernel.org

From: Zi Yan

When dealing with hugetlb pages, manipulating struct page pointers
directly can land on the wrong struct page, since struct pages are not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use
nth_page() to handle it properly.

Fixes: 57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")
Cc: stable@vger.kernel.org
Signed-off-by: Zi Yan
Reviewed-by: Muchun Song
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2e7188876672..2521cc694fd4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6489,7 +6489,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 		}
 	}
 
-	page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+	page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
 
 	/*
 	 * Note that page may be a sub-page, and with vmemmap