From patchwork Fri Oct 25 00:44:55 2024
X-Patchwork-Submitter: Kefeng Wang
X-Patchwork-Id: 13849873
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: David Hildenbrand, Matthew Wilcox, Muchun Song, "Huang, Ying", Kefeng Wang
Subject: [PATCH resend 1/2] mm: always use base address when clearing gigantic page
Date: Fri, 25 Oct 2024 08:44:55 +0800
Message-ID: <20241025004456.3435808-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0

When clearing a gigantic page, clear_gigantic_page() zeroes the page from
the first subpage to the last one, so it needs the aligned base address of
the folio. Do the alignment inside clear_gigantic_page() itself; the caller
must not align the address down, because the real address is what gets
passed to process_huge_page().

Fixes: 78fefd04c123 ("mm: memory: convert clear_huge_page() to folio_zero_user()")
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/hugetlbfs/inode.c | 2 +-
 mm/memory.c          | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index a4441fb77f7c..a5ea006f403e 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -825,7 +825,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 			error = PTR_ERR(folio);
 			goto out;
 		}
-		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
+		folio_zero_user(folio, addr);
 		__folio_mark_uptodate(folio);
 		error = hugetlb_add_to_page_cache(folio, mapping, index);
 		if (unlikely(error)) {
diff --git a/mm/memory.c b/mm/memory.c
index 48e534aa939c..934ab5fff537 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6802,6 +6802,7 @@ static void clear_gigantic_page(struct folio *folio, unsigned long addr,
 	int i;
 
 	might_sleep();
+	addr = ALIGN_DOWN(addr, folio_size(folio));
 	for (i = 0; i < nr_pages; i++) {
 		cond_resched();
 		clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
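
For context, here is a small stand-alone sketch of the address arithmetic the
patch relies on: the subpage loop needs the folio-aligned base address, while
the fault address itself stays unaligned for process_huge_page(). Everything
below is a userspace illustration, not kernel code; PAGE_SIZE, the 1 GiB folio
size, the sample address and the simplified ALIGN_DOWN macro are assumptions
made only for this example.

/*
 * Illustrative userspace sketch (assumed values, not kernel code):
 * clear_gigantic_page() clears subpages starting from the folio-aligned
 * base address, while folio_zero_user()'s callers keep passing the real,
 * possibly unaligned fault address.
 */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define FOLIO_SIZE	(1024UL * 1024 * 1024)	/* assumed 1 GiB gigantic folio */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))	/* power-of-two form of the kernel macro */

int main(void)
{
	unsigned long addr = 0x7f4240201000UL;	/* hypothetical unaligned fault address */
	unsigned long base = ALIGN_DOWN(addr, FOLIO_SIZE);
	unsigned long i;

	/* With this patch, the alignment happens inside clear_gigantic_page(): */
	for (i = 0; i < 3; i++)	/* first few of the nr_pages subpages */
		printf("clear subpage %lu at %#lx\n", i, base + i * PAGE_SIZE);

	/* The caller keeps the real address, which is what non-gigantic
	 * folios hand on to process_huge_page(). */
	printf("address kept for process_huge_page(): %#lx\n", addr);
	return 0;
}

Running this prints subpage addresses starting at 0x7f4240000000 while the
hint address stays 0x7f4240201000, which is the split the two hunks above
implement.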