From patchwork Tue Jun 18 09:12:39 2024
X-Patchwork-Submitter: Kefeng Wang <wangkefeng.wang@huawei.com>
X-Patchwork-Id: 13701998
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
CC: Muchun Song, "Matthew Wilcox (Oracle)", David Hildenbrand, linux-mm@kvack.org, Huang Ying, Kefeng Wang
Subject: [PATCH v2 1/4] mm: memory: convert clear_huge_page() to folio_zero_user()
Date: Tue, 18 Jun 2024 17:12:39 +0800
Message-ID: <20240618091242.2140164-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20240618091242.2140164-1-wangkefeng.wang@huawei.com>
References: <20240618091242.2140164-1-wangkefeng.wang@huawei.com>

Replace clear_huge_page() with folio_zero_user(), which takes a folio
instead of a page. The number of pages is obtained directly via
folio_nr_pages(), so the pages_per_huge_page argument can be dropped.
Also move the address alignment from folio_zero_user() to its callers,
since the alignment is only needed when the caller does not know which
address will actually be accessed.
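
As an illustration only (not part of the diff below): a caller that knows
the faulting address simply passes it as the hint, while a caller with no
specific target address, such as hugetlbfs_fallocate(), now performs the
alignment itself. A minimal before/after sketch of the call pattern,
reusing the names from the hugetlbfs hunk in the diff:

	/* Before: clear_huge_page() aligned addr down internally. */
	clear_huge_page(&folio->page, addr, pages_per_huge_page(h));

	/*
	 * After: pass the folio plus an address hint; align explicitly
	 * only when the accessed address is unknown (here: down to the
	 * huge page size).
	 */
	folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
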
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 fs/hugetlbfs/inode.c |  2 +-
 include/linux/mm.h   |  4 +---
 mm/huge_memory.c     |  4 ++--
 mm/hugetlb.c         |  3 +--
 mm/memory.c          | 34 ++++++++++++++++------------------
 5 files changed, 21 insertions(+), 26 deletions(-)

diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 1107e5aa8343..ecad73a4f713 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -892,7 +892,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
 			error = PTR_ERR(folio);
 			goto out;
 		}
-		clear_huge_page(&folio->page, addr, pages_per_huge_page(h));
+		folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
 		__folio_mark_uptodate(folio);
 		error = hugetlb_add_to_page_cache(folio, mapping, index);
 		if (unlikely(error)) {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 106bb0310352..ecbca6281a4e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4071,9 +4071,7 @@ enum mf_action_page_type {
 };
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
-extern void clear_huge_page(struct page *page,
-			    unsigned long addr_hint,
-			    unsigned int pages_per_huge_page);
+void folio_zero_user(struct folio *folio, unsigned long addr_hint);
 int copy_user_large_folio(struct folio *dst, struct folio *src,
 			  unsigned long addr_hint,
 			  struct vm_area_struct *vma);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f409ea9fcc18..85b852861610 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -943,10 +943,10 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		goto release;
 	}
 
-	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
+	folio_zero_user(folio, vmf->address);
 	/*
 	 * The memory barrier inside __folio_mark_uptodate makes sure that
-	 * clear_huge_page writes become visible before the set_pmd_at()
+	 * folio_zero_user writes become visible before the set_pmd_at()
 	 * write.
 	 */
 	__folio_mark_uptodate(folio);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3518321f6598..58d8703a1065 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6296,8 +6296,7 @@ static vm_fault_t hugetlb_no_page(struct address_space *mapping,
 			ret = 0;
 			goto out;
 		}
-		clear_huge_page(&folio->page, vmf->real_address,
-				pages_per_huge_page(h));
+		folio_zero_user(folio, vmf->real_address);
 		__folio_mark_uptodate(folio);
 		new_folio = true;
 
diff --git a/mm/memory.c b/mm/memory.c
index 54d7d2acdf39..bc5d446d9174 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4477,7 +4477,7 @@ static struct folio *alloc_anon_folio(struct vm_fault *vmf)
 				goto next;
 			}
 			folio_throttle_swaprate(folio, gfp);
-			clear_huge_page(&folio->page, vmf->address, 1 << order);
+			folio_zero_user(folio, vmf->address);
 			return folio;
 		}
 next:
@@ -6420,41 +6420,39 @@ static inline int process_huge_page(
 	return 0;
 }
 
-static void clear_gigantic_page(struct page *page,
-				unsigned long addr,
+static void clear_gigantic_page(struct folio *folio, unsigned long addr,
 				unsigned int pages_per_huge_page)
 {
 	int i;
-	struct page *p;
 
 	might_sleep();
 	for (i = 0; i < pages_per_huge_page; i++) {
-		p = nth_page(page, i);
 		cond_resched();
-		clear_user_highpage(p, addr + i * PAGE_SIZE);
+		clear_user_highpage(folio_page(folio, i), addr + i * PAGE_SIZE);
 	}
 }
 
 static int clear_subpage(unsigned long addr, int idx, void *arg)
 {
-	struct page *page = arg;
+	struct folio *folio = arg;
 
-	clear_user_highpage(nth_page(page, idx), addr);
+	clear_user_highpage(folio_page(folio, idx), addr);
 	return 0;
 }
 
-void clear_huge_page(struct page *page,
-		     unsigned long addr_hint, unsigned int pages_per_huge_page)
+/**
+ * folio_zero_user - Zero a folio which will be mapped to userspace.
+ * @folio: The folio to zero.
+ * @addr_hint: The address that will be accessed, or the base address if unclear.
+ */
+void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 {
-	unsigned long addr = addr_hint &
-		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
+	unsigned int nr_pages = folio_nr_pages(folio);
 
-	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
-		clear_gigantic_page(page, addr, pages_per_huge_page);
-		return;
-	}
-
-	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
+	if (unlikely(nr_pages > MAX_ORDER_NR_PAGES))
+		clear_gigantic_page(folio, addr_hint, nr_pages);
+	else
+		process_huge_page(addr_hint, nr_pages, clear_subpage, folio);
 }
 
 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,