From patchwork Wed Dec 20 22:44:25 2023
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13500614
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton,
 "Matthew Wilcox (Oracle)", Hugh Dickins, Ryan Roberts, Yin Fengwei,
 Mike Kravetz, Muchun Song, Peter Xu, Muchun Song
Subject: [PATCH v2 01/40] mm/rmap: rename hugepage_add* to hugetlb_add*
Date: Wed, 20 Dec 2023 23:44:25 +0100
Message-ID: <20231220224504.646757-2-david@redhat.com>
In-Reply-To: <20231220224504.646757-1-david@redhat.com>
References: <20231220224504.646757-1-david@redhat.com>

Let's just call it "hugetlb_". Yes, it's all already inconsistent and
confusing because we have a lot of "hugepage_" functions for legacy
reasons. But "hugetlb" cannot possibly be confused with transparent huge
pages, and it matches "hugetlb.c" and "folio_test_hugetlb()". So let's
minimize confusion in rmap code.
Reviewed-by: Muchun Song
Signed-off-by: David Hildenbrand
Reviewed-by: Matthew Wilcox (Oracle)
---
 include/linux/rmap.h | 4 ++--
 mm/hugetlb.c         | 8 ++++----
 mm/migrate.c         | 4 ++--
 mm/rmap.c            | 8 ++++----
 4 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0ae2bb0e77f5d..36096ba69bdcd 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -206,9 +206,9 @@ void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
 void page_remove_rmap(struct page *, struct vm_area_struct *,
         bool compound);
 
-void hugepage_add_anon_rmap(struct folio *, struct vm_area_struct *,
+void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
         unsigned long address, rmap_t flags);
-void hugepage_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
+void hugetlb_add_new_anon_rmap(struct folio *, struct vm_area_struct *,
         unsigned long address);
 
 static inline void __page_dup_rmap(struct page *page, bool compound)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6feb3e0630d18..305f3ca1dee62 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5285,7 +5285,7 @@ hugetlb_install_folio(struct vm_area_struct *vma, pte_t *ptep, unsigned long add
     pte_t newpte = make_huge_pte(vma, &new_folio->page, 1);
 
     __folio_mark_uptodate(new_folio);
-    hugepage_add_new_anon_rmap(new_folio, vma, addr);
+    hugetlb_add_new_anon_rmap(new_folio, vma, addr);
     if (userfaultfd_wp(vma) && huge_pte_uffd_wp(old))
         newpte = huge_pte_mkuffd_wp(newpte);
     set_huge_pte_at(vma->vm_mm, addr, ptep, newpte, sz);
@@ -5988,7 +5988,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
         /* Break COW or unshare */
         huge_ptep_clear_flush(vma, haddr, ptep);
         page_remove_rmap(&old_folio->page, vma, true);
-        hugepage_add_new_anon_rmap(new_folio, vma, haddr);
+        hugetlb_add_new_anon_rmap(new_folio, vma, haddr);
         if (huge_pte_uffd_wp(pte))
             newpte = huge_pte_mkuffd_wp(newpte);
         set_huge_pte_at(mm, haddr, ptep, newpte, huge_page_size(h));
@@ -6277,7 +6277,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
         goto backout;
 
     if (anon_rmap)
-        hugepage_add_new_anon_rmap(folio, vma, haddr);
+        hugetlb_add_new_anon_rmap(folio, vma, haddr);
     else
         page_dup_file_rmap(&folio->page, true);
     new_pte = make_huge_pte(vma, &folio->page, ((vma->vm_flags & VM_WRITE)
@@ -6732,7 +6732,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
     if (folio_in_pagecache)
         page_dup_file_rmap(&folio->page, true);
     else
-        hugepage_add_new_anon_rmap(folio, dst_vma, dst_addr);
+        hugetlb_add_new_anon_rmap(folio, dst_vma, dst_addr);
 
     /*
      * For either: (1) CONTINUE on a non-shared VMA, or (2) UFFDIO_COPY
diff --git a/mm/migrate.c b/mm/migrate.c
index bad3039d165e6..7d1c3f292d24d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -249,8 +249,8 @@ static bool remove_migration_pte(struct folio *folio,
             pte = arch_make_huge_pte(pte, shift, vma->vm_flags);
             if (folio_test_anon(folio))
-                hugepage_add_anon_rmap(folio, vma, pvmw.address,
-                        rmap_flags);
+                hugetlb_add_anon_rmap(folio, vma, pvmw.address,
+                        rmap_flags);
             else
                 page_dup_file_rmap(new, true);
             set_huge_pte_at(vma->vm_mm, pvmw.address, pvmw.pte, pte,
diff --git a/mm/rmap.c b/mm/rmap.c
index 23da5b1ac33b4..9845499b22f8f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2625,8 +2625,8 @@ void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc)
  *
  * RMAP_COMPOUND is ignored.
  */
-void hugepage_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
-        unsigned long address, rmap_t flags)
+void hugetlb_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
+        unsigned long address, rmap_t flags)
 {
     VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
 
@@ -2637,8 +2637,8 @@ void hugepage_add_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
              PageAnonExclusive(&folio->page), folio);
 }
 
-void hugepage_add_new_anon_rmap(struct folio *folio,
-        struct vm_area_struct *vma, unsigned long address)
+void hugetlb_add_new_anon_rmap(struct folio *folio,
+        struct vm_area_struct *vma, unsigned long address)
 {
     BUG_ON(address < vma->vm_start || address >= vma->vm_end);
     /* increment count (starts at -1) */
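
For readers less familiar with the rmap API, a minimal caller sketch follows.
It is not part of the patch: the wrapper name and its parameters are invented
for illustration, and only the two hugetlb_add_*_anon_rmap() signatures come
from the include/linux/rmap.h hunk above.

/* Illustrative sketch only -- not taken from this patch. */
#include <linux/mm.h>
#include <linux/rmap.h>

/*
 * Hypothetical wrapper: add an anonymous hugetlb folio to the rmap at
 * @addr in @vma. A freshly allocated folio uses the "new" variant
 * (which sets up the entire mapcount, per the "starts at -1" comment in
 * the mm/rmap.c hunk); an already-anon folio, e.g. during migration,
 * uses hugetlb_add_anon_rmap() with the caller's rmap flags.
 */
static void sketch_hugetlb_anon_rmap(struct folio *folio,
        struct vm_area_struct *vma, unsigned long addr,
        bool new_folio, rmap_t flags)
{
    if (new_folio)
        hugetlb_add_new_anon_rmap(folio, vma, addr);
    else
        hugetlb_add_anon_rmap(folio, vma, addr, flags);
}

The split mirrors the callers touched by the diff: mm/hugetlb.c passes fresh
folios to the "new" variant, while mm/migrate.c re-adds an existing anon folio
with hugetlb_add_anon_rmap() and rmap flags.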