From patchwork Wed Sep 13 20:12:45 2023
X-Patchwork-Submitter: Zi Yan
X-Patchwork-Id: 13383740
From: Zi Yan
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org
Cc: Zi Yan, Andrew Morton, Thomas Bogendoerfer, "Matthew Wilcox (Oracle)", David Hildenbrand, Mike Kravetz, Muchun Song, "Mike Rapoport (IBM)", stable@vger.kernel.org, Muchun Song
Subject: [PATCH v3 2/5] mm/hugetlb: use nth_page() in place of direct struct page manipulation.
Date: Wed, 13 Sep 2023 16:12:45 -0400
Message-Id: <20230913201248.452081-3-zi.yan@sent.com>
In-Reply-To: <20230913201248.452081-1-zi.yan@sent.com>
References: <20230913201248.452081-1-zi.yan@sent.com>
Reply-To: Zi Yan
X-Mailing-List: linux-mips@vger.kernel.org

From: Zi Yan

When dealing with hugetlb pages, manipulating struct page pointers
directly can yield the wrong struct page, since struct page is not
guaranteed to be contiguous on SPARSEMEM without VMEMMAP. Use
nth_page() to handle it properly.

Fixes: 57a196a58421 ("hugetlb: simplify hugetlb handling in follow_page_mask")
Cc:
Signed-off-by: Zi Yan
Reviewed-by: Muchun Song
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index af74e83d92aa..8e68e6c53e66 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6469,7 +6469,7 @@ struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
 		}
 	}
 
-	page += ((address & ~huge_page_mask(h)) >> PAGE_SHIFT);
+	page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
 	/*
 	 * Note that page may be a sub-page, and with vmemmap