From patchwork Sat Feb 18 00:28:07 2023
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13145400
Date: Sat, 18 Feb 2023 00:28:07 +0000
In-Reply-To: <20230218002819.1486479-1-jthoughton@google.com>
References: <20230218002819.1486479-1-jthoughton@google.com>
Message-ID: <20230218002819.1486479-35-jthoughton@google.com>
Subject: [PATCH v2 34/46] hugetlb: add MADV_COLLAPSE for hugetlb
From: James Houghton <jthoughton@google.com>
To: Mike Kravetz, Muchun Song, Peter Xu, Andrew Morton
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
 Zach O'Keefe, Manish Mishra, Naoya Horiguchi,
David Alan Gilbert" , "Matthew Wilcox (Oracle)" , Vlastimil Babka , Baolin Wang , Miaohe Lin , Yang Shi , Frank van der Linden , Jiaqi Yan , linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton X-Rspam-User: X-Rspamd-Server: rspam03 X-Stat-Signature: 6pupykhdj7msmbuk8diyxf417eire3fr X-Rspamd-Queue-Id: 20FAE160008 X-HE-Tag: 1676680158-104152 X-HE-Meta: U2FsdGVkX1/v0jljVEdCDwUc8XatRk7UGvm+JzW0TKe+/fFSjDcrSFv+zNqC32vE1FqH6BxtqZQE70dz+3TBqmk+ciVATxwr7u183SPEglDNjm3Ameql0jpNdr4q2cv9xjy92O0fhhaTJK41Q8Z0THGLaaktEZJ8CHX9gpBCk7QKoX9+zw6V55IuBzn0YogfMGLJAUcrFhaiANq2ogY5xRJNZYqAckDNpaUK30a0e3gPXE9rBBt9NnUdvdFsg5656rldY+k/aa5XeS5W1agblduGRpj7XiVM7G8Eyao/TGPv7tRBY8jUnJltQMqpEgqv4FUuBtCoimfJDsOdIeJaXaHi7TWX2qSQ9xpzDNO7EZ9ka3gPPriLfyA9cT4ACLFHd9aV1aJHo2XvqnYRhLKELMDDhXjOfzYKTAS/JyZzIe3N/rgF5bD8I6SBNbVIHXhUk7qrNGKzvuGpK84PJztvU/mYq7Of7ZZfTYwkkXEfu1DhexxiIe1A2qpt0ereshE94cYN1D7FVt2rMCilhEyay8CzSRL+/AlaIJXMAg5HRkBeGOA/EKa8aJXhxr3WxoJAhyziKL7wFOoXbnw8mBpEEoJVp68+uPDTwwJwGtnwtBMsXB+vrkorMuUOcQ7evbQFP4C1cY8cPdeKSdRtlE4VYrKQNLcMWxkZXn5VJqs00F4RrgaVHKMXKAkZ/vZTf+nbcy17OExBOr45S6wX6Z/M7kW4ZxYa5JXVmxjMzPC29VHgTrk7W9ZGSx8pw//OfuwpaC5mzgKsWjgIGtF8p/6WdpkdWZnyeKPL56S93SjG1zFMoI4ciwYSgeRnOS/V+xjKuoBwadevOvb3tybamOBGG7v6Yum6ZBV/XcSGoQPkQmr69K8PzvBUnbH/jm4RMVR3c+juqYKP1bA7k2PRJvhW7ElgWPOsbMu6nX/m8/9a5F9jY+AL0/o14bHbX4A+jl+TqJDI6Urf0XvH+69mIpt EPTWUNdx hKJS71EAC+XP+ThHDV3NO+CC7IzIWDwNRKyqadVgUzMq5xV3gQ2iHdrPjiB3Hsyh9/A5AjS7Mby+PQkDB+YrhPHumeo2XI82kIIxuvJ+agqqILrrxWrXZaUcGbbXn8xmfNvMVdkPSBdsg8Tl3xhv5bBSdN74WeN0CgKHD8I5sqMjxqmv5NcWDf2COnduhPzbqWsS2WTJDs4FW5dZJ/5KofjgVnQJjEqJM1d2HrMlFaMiJdEwNE8FF8gteZAfhy+431ZwqW0g0FCT2qJsBVDWdpzuDvOIiVpUqnjB4NTfGeTJ4CqFFWrQgLqFqj+nM8JIUCQoWZ0IJQB6zWQvli38fzZ/MEccJJi1yVd9vjiW09S1dKLW1SWAnl6uB/K/dmrQCErDjAGOT8fE6eO+/cH2cQPcXxY8jVIKYSHaoWYoY3YKknAjsfHmPiyV9+M20DX6tlhujIVuo64giJRxoCLCwR67ik72tEhxOp4aC07DCYyH6j1Hnhkp2JWCsB/dMpZ9A5Mhoz3pZ8XqkelPt6KUOZZPrMcRmjr9TSBLlapf54yu2ntYj6lv3OKv44CcBukdXoUv0/syJuhztjXTrph3R1iwPvsCX0RAKfOmY4K6M4q8ktjRbfloubHSdQXbu5zVqi/2jFTIaa8jIwruY2LxCOURWyQ/R+pf1ywyvZKNGTK0bZwFWUueIkaTyIN2n/faUMCuNo80r7oU+BIM= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: This is a necessary extension to the UFFDIO_CONTINUE changes. When userspace finishes mapping an entire hugepage with UFFDIO_CONTINUE, the kernel has no mechanism to automatically collapse the page table to map the whole hugepage normally. We require userspace to inform us that they would like the mapping to be collapsed; they do this with MADV_COLLAPSE. If userspace has not mapped all of a hugepage with UFFDIO_CONTINUE, but only some, hugetlb_collapse will cause the requested range to be mapped as if it were UFFDIO_CONTINUE'd already. The effects of any UFFDIO_WRITEPROTECT calls may be undone by a call to MADV_COLLAPSE for intersecting address ranges. This commit is co-opting the same madvise mode that has been introduced to synchronously collapse THPs. The function that does THP collapsing has been renamed to madvise_collapse_thp. As with the rest of the high-granularity mapping support, MADV_COLLAPSE is only supported for shared VMAs right now. MADV_COLLAPSE for HugeTLB takes the mmap_lock for writing. It is important that we check PageHWPoison before checking !HPageMigratable, as PageHWPoison implies !HPageMigratable. !PageHWPoison && !HPageMigratable means that the page has been isolated for migration. 
Signed-off-by: James Houghton <jthoughton@google.com>

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 70bd867eba94..fa63a56ebaf0 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -218,9 +218,9 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
 
 int hugepage_madvise(struct vm_area_struct *vma, unsigned long *vm_flags,
                      int advice);
-int madvise_collapse(struct vm_area_struct *vma,
-                     struct vm_area_struct **prev,
-                     unsigned long start, unsigned long end);
+int madvise_collapse_thp(struct vm_area_struct *vma,
+                         struct vm_area_struct **prev,
+                         unsigned long start, unsigned long end);
 void vma_adjust_trans_huge(struct vm_area_struct *vma, unsigned long start,
                            unsigned long end, long adjust_next);
 spinlock_t *__pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma);
@@ -358,9 +358,9 @@ static inline int hugepage_madvise(struct vm_area_struct *vma,
 	return -EINVAL;
 }
 
-static inline int madvise_collapse(struct vm_area_struct *vma,
-                                   struct vm_area_struct **prev,
-                                   unsigned long start, unsigned long end)
+static inline int madvise_collapse_thp(struct vm_area_struct *vma,
+                                       struct vm_area_struct **prev,
+                                       unsigned long start, unsigned long end)
 {
 	return -EINVAL;
 }
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index e0e51bb06112..6cd4ae08d84d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -1278,6 +1278,8 @@ bool hugetlb_hgm_eligible(struct vm_area_struct *vma);
 int hugetlb_alloc_largest_pte(struct hugetlb_pte *hpte, struct mm_struct *mm,
                               struct vm_area_struct *vma, unsigned long start,
                               unsigned long end);
+int hugetlb_collapse(struct mm_struct *mm, unsigned long start,
+                     unsigned long end);
 #else
 static inline bool hugetlb_hgm_enabled(struct vm_area_struct *vma)
 {
@@ -1298,6 +1300,12 @@ int hugetlb_alloc_largest_pte(struct hugetlb_pte *hpte, struct mm_struct *mm,
 {
 	return -EINVAL;
 }
+static inline
+int hugetlb_collapse(struct mm_struct *mm, unsigned long start,
+                     unsigned long end)
+{
+	return -EINVAL;
+}
 #endif
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a00b4ac07046..c4d189e5f1fd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -8014,6 +8014,158 @@ int hugetlb_alloc_largest_pte(struct hugetlb_pte *hpte, struct mm_struct *mm,
 	return 0;
 }
 
+/*
+ * Collapse the address range from @start to @end to be mapped optimally.
+ *
+ * This is only valid for shared mappings. The main use case for this function
+ * is following UFFDIO_CONTINUE. If a user UFFDIO_CONTINUEs an entire hugepage
+ * by calling UFFDIO_CONTINUE once for each 4K region, the kernel doesn't know
+ * to collapse the mapping after the final UFFDIO_CONTINUE. Instead, we leave
+ * it up to userspace to tell us to do so, via MADV_COLLAPSE.
+ *
+ * Any holes in the mapping will be filled. If there is no page in the
+ * pagecache for a region we're collapsing, the PTEs will be cleared.
+ *
+ * If high-granularity PTEs are uffd-wp markers, those markers will be dropped.
+ */
+static int __hugetlb_collapse(struct mm_struct *mm, struct vm_area_struct *vma,
+                              unsigned long start, unsigned long end)
+{
+	struct hstate *h = hstate_vma(vma);
+	struct address_space *mapping = vma->vm_file->f_mapping;
+	struct mmu_notifier_range range;
+	struct mmu_gather tlb;
+	unsigned long curr = start;
+	int ret = 0;
+	struct folio *folio;
+	struct page *subpage;
+	pgoff_t idx;
+	bool writable = vma->vm_flags & VM_WRITE;
+	struct hugetlb_pte hpte;
+	pte_t entry;
+	spinlock_t *ptl;
+
+	/*
+	 * This is only supported for shared VMAs, because we need to look up
+	 * the page to use for any PTEs we end up creating.
+	 */
+	if (!(vma->vm_flags & VM_MAYSHARE))
+		return -EINVAL;
+
+	/* If HGM is not enabled, there is nothing to collapse. */
+	if (!hugetlb_hgm_enabled(vma))
+		return 0;
+
+	tlb_gather_mmu(&tlb, mm);
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, start, end);
+	mmu_notifier_invalidate_range_start(&range);
+
+	while (curr < end) {
+		ret = hugetlb_alloc_largest_pte(&hpte, mm, vma, curr, end);
+		if (ret)
+			goto out;
+
+		entry = huge_ptep_get(hpte.ptep);
+
+		/*
+		 * There is no work to do if the PTE doesn't point to page
+		 * tables.
+		 */
+		if (!pte_present(entry))
+			goto next_hpte;
+		if (hugetlb_pte_present_leaf(&hpte, entry))
+			goto next_hpte;
+
+		idx = vma_hugecache_offset(h, vma, curr);
+		folio = filemap_get_folio(mapping, idx);
+
+		if (folio && folio_test_hwpoison(folio)) {
+			/*
+			 * Don't collapse a mapping to a page that is
+			 * hwpoisoned. The entire page will be poisoned.
+			 *
+			 * When HugeTLB supports poisoning PAGE_SIZE bits of
+			 * the hugepage, the logic here can be improved.
+			 *
+			 * Skip this page, and continue to collapse the rest
+			 * of the mapping.
+			 */
+			folio_put(folio);
+			curr = (curr & huge_page_mask(h)) + huge_page_size(h);
+			continue;
+		}
+
+		if (folio && !folio_test_hugetlb_migratable(folio)) {
+			/*
+			 * Don't collapse a mapping to a page that is pending
+			 * a migration. Migration swap entries may have been
+			 * placed in the page table.
+			 */
+			ret = -EBUSY;
+			folio_put(folio);
+			goto out;
+		}
+
+		/*
+		 * Clear all the PTEs, and drop ref/mapcounts
+		 * (on tlb_finish_mmu).
+		 */
+		__unmap_hugepage_range(&tlb, vma, curr,
+				       curr + hugetlb_pte_size(&hpte),
+				       NULL,
+				       ZAP_FLAG_DROP_MARKER);
+		/* Free the PTEs. */
+		hugetlb_free_pgd_range(&tlb,
+				       curr, curr + hugetlb_pte_size(&hpte),
+				       curr, curr + hugetlb_pte_size(&hpte));
+
+		ptl = hugetlb_pte_lock(&hpte);
+
+		if (!folio) {
+			huge_pte_clear(mm, curr, hpte.ptep,
+				       hugetlb_pte_size(&hpte));
+			spin_unlock(ptl);
+			goto next_hpte;
+		}
+
+		subpage = hugetlb_find_subpage(h, folio, curr);
+		entry = make_huge_pte_with_shift(vma, subpage,
+						 writable, hpte.shift);
+		hugetlb_add_file_rmap(subpage, hpte.shift, h, vma);
+		set_huge_pte_at(mm, curr, hpte.ptep, entry);
+		spin_unlock(ptl);
+next_hpte:
+		curr += hugetlb_pte_size(&hpte);
+	}
+out:
+	mmu_notifier_invalidate_range_end(&range);
+	tlb_finish_mmu(&tlb);
+
+	return ret;
+}
+
+int hugetlb_collapse(struct mm_struct *mm, unsigned long start,
+                     unsigned long end)
+{
+	int ret = 0;
+	struct vm_area_struct *vma;
+
+	mmap_write_lock(mm);
+	while (start < end || ret) {
+		vma = find_vma(mm, start);
+		if (!vma || !is_vm_hugetlb_page(vma)) {
+			ret = -EINVAL;
+			break;
+		}
+		ret = __hugetlb_collapse(mm, vma, start,
+				end < vma->vm_end ? end : vma->vm_end);
end : vma->vm_end); + start = vma->vm_end; + } + mmap_write_unlock(mm); + return ret; +} + #endif /* CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING */ /* diff --git a/mm/khugepaged.c b/mm/khugepaged.c index 8dbc39896811..58cda5020537 100644 --- a/mm/khugepaged.c +++ b/mm/khugepaged.c @@ -2750,8 +2750,8 @@ static int madvise_collapse_errno(enum scan_result r) } } -int madvise_collapse(struct vm_area_struct *vma, struct vm_area_struct **prev, - unsigned long start, unsigned long end) +int madvise_collapse_thp(struct vm_area_struct *vma, struct vm_area_struct **prev, + unsigned long start, unsigned long end) { struct collapse_control *cc; struct mm_struct *mm = vma->vm_mm; diff --git a/mm/madvise.c b/mm/madvise.c index 8c004c678262..e121d135252a 100644 --- a/mm/madvise.c +++ b/mm/madvise.c @@ -1028,6 +1028,24 @@ static int madvise_split(struct vm_area_struct *vma, #endif } +static int madvise_collapse(struct vm_area_struct *vma, + struct vm_area_struct **prev, + unsigned long start, unsigned long end) +{ + if (is_vm_hugetlb_page(vma)) { + struct mm_struct *mm = vma->vm_mm; + int ret; + + *prev = NULL; /* tell sys_madvise we dropped the mmap lock */ + mmap_read_unlock(mm); + ret = hugetlb_collapse(mm, start, end); + mmap_read_lock(mm); + return ret; + } + + return madvise_collapse_thp(vma, prev, start, end); +} + /* * Apply an madvise behavior to a region of a vma. madvise_update_vma * will handle splitting a vm area into separate areas, each area with its own @@ -1204,6 +1222,9 @@ madvise_behavior_valid(int behavior) #ifdef CONFIG_TRANSPARENT_HUGEPAGE case MADV_HUGEPAGE: case MADV_NOHUGEPAGE: +#endif +#if defined(CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING) || \ + defined(CONFIG_TRANSPARENT_HUGEPAGE) case MADV_COLLAPSE: #endif #ifdef CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING @@ -1397,7 +1418,8 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start, * MADV_NOHUGEPAGE - mark the given range as not worth being backed by * transparent huge pages so the existing pages will not be * coalesced into THP and new pages will not be allocated as THP. - * MADV_COLLAPSE - synchronously coalesce pages into new THP. + * MADV_COLLAPSE - synchronously coalesce pages into new THP, or, for HugeTLB + * pages, collapse the mapping. * MADV_SPLIT - allow HugeTLB pages to be mapped at PAGE_SIZE. This allows * UFFDIO_CONTINUE to accept PAGE_SIZE-aligned regions. * MADV_DONTDUMP - the application wants to prevent pages in the given range