From patchwork Fri Oct 8 18:32:55 2021
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 12546183
Date: Fri, 8 Oct 2021 11:32:55 -0700
Message-Id: <20211008183256.1558105-1-almasrymina@google.com>
X-Mailer: git-send-email 2.33.0.882.g93a45727a2-goog
Subject: [PATCH v5 1/2] mm, hugepages: add mremap() support for hugepage backed vma
From: Mina Almasry
Cc: Mina Almasry, Mike Kravetz, Andrew Morton, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Ken Chen, Chris Kennelly, Michal Hocko,
    Vlastimil Babka, Kirill Shutemov

Support mremap() for hugepage backed vmas by repositioning the page
table entries to the new virtual address on mremap().

Hugetlb mremap() support is of course generic; my motivating use case
is a library (hugepage_text) which reloads the ELF text of executables
in hugepages. This significantly increases the execution performance of
said executables.

The mremap() operation on a hugepage backed vma is restricted to at
most the size of the original mapping, as the underlying hugetlb
reservation is not yet capable of handling remapping to a larger size.

During the mremap() operation we detect pmd_share'd mappings and
unshare them; the sharing is re-established on the next access or
fault.

Signed-off-by: Mina Almasry
Cc: Mike Kravetz
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: Ken Chen
Cc: Chris Kennelly
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Kirill Shutemov
Reported-by: kernel test robot

---

Changes in v5:
- Removed the hugetlb_vma_shareable and huge_pmd_shared dummy
  definitions for the !CONFIG_HUGETLB_PAGE config, since they are unused
  and were causing build warnings and errors.

Changes in v4:
- Added addr, new_addr, old_len, and new_len hugepage alignment.

Changes in v3:
- Addressed review comments from Mike.
- Separated tests into their own patch.

Changes in v2:
- Re-wrote the comment around clear_vma_resv_huge_pages() to make it
  clear that the resv_map has been moved to the new VMA and why we need
  to clear it from the current VMA.
- We detect huge_pmd_shared() ptes and unshare those rather than BUG on
  hugetlb_vma_shareable().
- This case now returns EFAULT:
	if (!vma || vma->vm_start > addr)
		goto out;
- Added kselftests for mremap() support.
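[ Illustration, not part of the patch: a minimal userspace sketch of the
behavior this series enables. The /mnt/huge/demo path, the 512MB size,
and an existing hugetlbfs mount with enough hugepages reserved are all
assumptions. ]

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define LEN (512UL * 1024 * 1024) /* assumed multiple of the hugepage size */

int main(void)
{
	/* Hypothetical hugetlbfs file; any hugetlb backed mapping works. */
	int fd = open("/mnt/huge/demo", O_CREAT | O_RDWR, 0755);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	char *old = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_HUGETLB, fd, 0);
	if (old == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	old[0] = 'x'; /* fault in the first hugepage */

	/* Reserve a destination range, then move the mapping onto it.
	 * Without this series the mremap() below fails with EINVAL; with
	 * it, the huge page table entries are repositioned to the new
	 * address. Growing the mapping (new_len > old_len) is still
	 * rejected.
	 */
	void *dst = mmap(NULL, LEN, PROT_NONE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (dst == MAP_FAILED) {
		perror("mmap dst");
		return 1;
	}

	char *new = mremap(old, LEN, LEN, MREMAP_MAYMOVE | MREMAP_FIXED, dst);
	if (new == MAP_FAILED) {
		perror("mremap");
		return 1;
	}
	printf("moved to %p, first byte %c\n", new, new[0]);

	munmap(new, LEN);
	close(fd);
	return 0;
}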
---
 include/linux/hugetlb.h |  17 ++++
 mm/hugetlb.c            | 131 +++++++++++++++++++++++++++++++++++++---
 mm/mremap.c             |  32 +++++++++-
 3 files changed, 169 insertions(+), 11 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebaba02706c87..9a9d207fe7edb 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -124,6 +124,7 @@ struct hugepage_subpool *hugepage_new_subpool(struct hstate *h, long max_hpages,
 void hugepage_put_subpool(struct hugepage_subpool *spool);
 
 void reset_vma_resv_huge_pages(struct vm_area_struct *vma);
+void clear_vma_resv_huge_pages(struct vm_area_struct *vma);
 int hugetlb_sysctl_handler(struct ctl_table *, int, void *, size_t *, loff_t *);
 int hugetlb_overcommit_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
@@ -132,6 +133,10 @@ int hugetlb_treat_movable_handler(struct ctl_table *, int, void *, size_t *,
 int hugetlb_mempolicy_sysctl_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
 
+int move_hugetlb_page_tables(struct vm_area_struct *vma,
+			     struct vm_area_struct *new_vma,
+			     unsigned long old_addr, unsigned long new_addr,
+			     unsigned long len);
 int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *, struct vm_area_struct *);
 long follow_hugetlb_page(struct mm_struct *, struct vm_area_struct *,
 			 struct page **, struct vm_area_struct **,
@@ -187,6 +192,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 		       unsigned long addr, unsigned long sz);
 int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 				unsigned long *addr, pte_t *ptep);
+int huge_pmd_shared(struct vm_area_struct *vma, pte_t *ptep);
 void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
 				unsigned long *start, unsigned long *end);
 struct page *follow_huge_addr(struct mm_struct *mm, unsigned long address,
@@ -208,6 +214,7 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 
 bool is_hugetlb_entry_migration(pte_t pte);
 void hugetlb_unshare_all_pmds(struct vm_area_struct *vma);
+bool hugetlb_vma_shareable(struct vm_area_struct *vma, unsigned long addr);
 
 #else /* !CONFIG_HUGETLB_PAGE */
 
@@ -215,6 +222,10 @@ static inline void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
 {
 }
 
+static inline void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
+{
+}
+
 static inline unsigned long hugetlb_total_pages(void)
 {
 	return 0;
@@ -262,6 +273,12 @@ static inline int copy_hugetlb_page_range(struct mm_struct *dst,
 	return 0;
 }
 
+#define move_hugetlb_page_tables(vma, new_vma, old_addr, new_addr, len) \
+	({ \
+		BUG(); \
+		0; \
+	})
+
 static inline void hugetlb_report_meminfo(struct seq_file *m)
 {
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6d2f4c25dd9fb..8200b4c8d09d8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1015,6 +1015,35 @@ void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
 		vma->vm_private_data = (void *)0;
 }
 
+/*
+ * Reset and decrement one ref on hugepage private reservation.
+ * Called with mm->mmap_sem writer semaphore held.
+ * This function should only be used by move_vma() and operates on
+ * vmas of the same size. It should never be called holding the last
+ * ref on the reservation.
+ */
+void clear_vma_resv_huge_pages(struct vm_area_struct *vma)
+{
+	/*
+	 * Clear the old hugetlb private page reservation.
+	 * It has already been transferred to new_vma.
+	 *
+	 * During a mremap() operation of a hugetlb vma we call move_vma(),
+	 * which copies *vma* into *new_vma* and unmaps *vma*. After the copy
+	 * operation both *new_vma* and *vma* share a reference to the resv_map
+	 * struct, and at that point *vma* is about to be unmapped. We don't
+	 * want to return the reservation to the pool at unmap of *vma* because
+	 * the reservation still lives on in new_vma, so simply decrement the
+	 * ref here and remove the resv_map reference from this vma.
+	 */
+	struct resv_map *reservations = vma_resv_map(vma);
+
+	if (reservations && is_vma_resv_set(vma, HPAGE_RESV_OWNER))
+		kref_put(&reservations->refs, resv_map_release);
+
+	reset_vma_resv_huge_pages(vma);
+}
+
 /* Returns true if the VMA has associated reserve pages */
 static bool vma_has_reserves(struct vm_area_struct *vma, long chg)
 {
@@ -4800,6 +4829,82 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 	return ret;
 }
 
+static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
+			  unsigned long new_addr, pte_t *src_pte)
+{
+	struct hstate *h = hstate_vma(vma);
+	struct mm_struct *mm = vma->vm_mm;
+	pte_t *dst_pte, pte;
+	spinlock_t *src_ptl, *dst_ptl;
+
+	dst_pte = huge_pte_offset(mm, new_addr, huge_page_size(h));
+	dst_ptl = huge_pte_lock(h, mm, dst_pte);
+	src_ptl = huge_pte_lockptr(h, mm, src_pte);
+
+	/*
+	 * We don't have to worry about the ordering of src and dst ptlocks
+	 * because exclusive mmap_sem (or the i_mmap_lock) prevents deadlock.
+	 */
+	if (src_ptl != dst_ptl)
+		spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
+
+	pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
+	set_huge_pte_at(mm, new_addr, dst_pte, pte);
+
+	if (src_ptl != dst_ptl)
+		spin_unlock(src_ptl);
+	spin_unlock(dst_ptl);
+}
+
+int move_hugetlb_page_tables(struct vm_area_struct *vma,
+			     struct vm_area_struct *new_vma,
+			     unsigned long old_addr, unsigned long new_addr,
+			     unsigned long len)
+{
+	struct hstate *h = hstate_vma(vma);
+	struct address_space *mapping = vma->vm_file->f_mapping;
+	unsigned long sz = huge_page_size(h);
+	struct mm_struct *mm = vma->vm_mm;
+	unsigned long old_end = old_addr + len;
+	unsigned long old_addr_copy;
+	pte_t *src_pte, *dst_pte;
+	struct mmu_notifier_range range;
+
+	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, mm, old_addr,
+				old_end);
+	adjust_range_if_pmd_sharing_possible(vma, &range.start, &range.end);
+	mmu_notifier_invalidate_range_start(&range);
+	/* Prevent race with file truncation */
+	i_mmap_lock_write(mapping);
+	for (; old_addr < old_end; old_addr += sz, new_addr += sz) {
+		src_pte = huge_pte_offset(mm, old_addr, sz);
+		if (!src_pte)
+			continue;
+		if (huge_pte_none(huge_ptep_get(src_pte)))
+			continue;
+
+		/* The old_addr arg to huge_pmd_unshare() is a pointer and so
+		 * the arg may be modified. Pass a copy instead to preserve the
+		 * value in old_addr.
+		 */
+		old_addr_copy = old_addr;
+
+		if (huge_pmd_unshare(mm, vma, &old_addr_copy, src_pte))
+			continue;
+
+		dst_pte = huge_pte_alloc(mm, new_vma, new_addr, sz);
+		if (!dst_pte)
+			break;
+
+		move_huge_pte(vma, old_addr, new_addr, src_pte);
+	}
+	i_mmap_unlock_write(mapping);
+	flush_tlb_range(vma, old_end - len, old_end);
+	mmu_notifier_invalidate_range_end(&range);
+
+	return len + old_addr - old_end;
+}
+
 static void __unmap_hugepage_range(struct mmu_gather *tlb,
 				   struct vm_area_struct *vma,
 				   unsigned long start, unsigned long end,
 				   struct page *ref_page)
@@ -6280,7 +6385,7 @@ static unsigned long page_table_shareable(struct vm_area_struct *svma,
 	return saddr;
 }
 
-static bool vma_shareable(struct vm_area_struct *vma, unsigned long addr)
+bool hugetlb_vma_shareable(struct vm_area_struct *vma, unsigned long addr)
 {
 	unsigned long base = addr & PUD_MASK;
 	unsigned long end = base + PUD_SIZE;
@@ -6299,7 +6404,7 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
 	if (uffd_disable_huge_pmd_share(vma))
 		return false;
 #endif
-	return vma_shareable(vma, addr);
+	return hugetlb_vma_shareable(vma, addr);
 }
 
 /*
@@ -6339,12 +6444,6 @@ void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
  * sharing is possible.  For hugetlbfs, this prevents removal of any page
  * table entries associated with the address space.  This is important as we
  * are setting up sharing based on existing page table entries (mappings).
- *
- * NOTE: This routine is only called from huge_pte_alloc.  Some callers of
- * huge_pte_alloc know that sharing is not possible and do not take
- * i_mmap_rwsem as a performance optimization.  This is handled by the
- * if !vma_shareable check at the beginning of the routine.  i_mmap_rwsem is
- * only required for subsequent processing.
  */
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, pud_t *pud)
@@ -6422,7 +6521,23 @@ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 	return 1;
 }
 
+int huge_pmd_shared(struct vm_area_struct *vma, pte_t *ptep)
+{
+	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+	BUG_ON(page_count(virt_to_page(ptep)) == 0);
+	if (page_count(virt_to_page(ptep)) == 1)
+		return 0;
+
+	return 1;
+}
+
 #else /* !CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
+static bool hugetlb_vma_shareable(struct vm_area_struct *vma,
+				  unsigned long addr)
+{
+	return false;
+}
+
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, pud_t *pud)
 {
diff --git a/mm/mremap.c b/mm/mremap.c
index c0b6c41b7b78f..6a3f7d38b7539 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -489,6 +489,10 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 	old_end = old_addr + len;
 	flush_cache_range(vma, old_addr, old_end);
 
+	if (is_vm_hugetlb_page(vma))
+		return move_hugetlb_page_tables(vma, new_vma, old_addr,
+						new_addr, len);
+
 	mmu_notifier_range_init(&range, MMU_NOTIFY_UNMAP, 0, vma, vma->vm_mm,
 				old_addr, old_end);
 	mmu_notifier_invalidate_range_start(&range);
@@ -646,6 +650,10 @@ static unsigned long move_vma(struct vm_area_struct *vma,
 		mremap_userfaultfd_prep(new_vma, uf);
 	}
 
+	if (is_vm_hugetlb_page(vma)) {
+		clear_vma_resv_huge_pages(vma);
+	}
+
 	/* Conceal VM_ACCOUNT so old reservation is not undone */
 	if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP)) {
 		vma->vm_flags &= ~VM_ACCOUNT;
@@ -739,9 +747,6 @@ static struct vm_area_struct *vma_to_resize(unsigned long addr,
 	    (vma->vm_flags & (VM_DONTEXPAND | VM_PFNMAP)))
 		return ERR_PTR(-EINVAL);
 
-	if (is_vm_hugetlb_page(vma))
-		return ERR_PTR(-EINVAL);
-
 	/* We can't remap across vm area boundaries */
 	if (old_len > vma->vm_end - addr)
 		return ERR_PTR(-EFAULT);
@@ -937,6 +942,27 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	if (mmap_write_lock_killable(current->mm))
 		return -EINTR;
 
+	vma = find_vma(mm, addr);
+	if (!vma || vma->vm_start > addr) {
+		ret = -EFAULT;
+		goto out;
+	}
+
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h __maybe_unused = hstate_vma(vma);
+
+		old_len = ALIGN(old_len, huge_page_size(h));
+		new_len = ALIGN(new_len, huge_page_size(h));
+		addr = ALIGN(addr, huge_page_size(h));
+		new_addr = ALIGN(new_addr, huge_page_size(h));
+
+		/*
+		 * Don't allow remap expansion, because the underlying hugetlb
+		 * reservation is not yet capable of handling split
+		 * reservations.
+		 */
+		if (new_len > old_len)
+			goto out;
+	}
 
 	if (flags & (MREMAP_FIXED | MREMAP_DONTUNMAP)) {
 		ret = mremap_to(addr, old_len, new_addr, new_len,

From patchwork Fri Oct 8 18:32:56 2021
X-Patchwork-Submitter: Mina Almasry
X-Patchwork-Id: 12546185
Date: Fri, 8 Oct 2021 11:32:56 -0700
In-Reply-To: <20211008183256.1558105-1-almasrymina@google.com>
Message-Id: <20211008183256.1558105-2-almasrymina@google.com>
References: <20211008183256.1558105-1-almasrymina@google.com>
X-Mailer: git-send-email 2.33.0.882.g93a45727a2-goog
Subject: [PATCH v5 2/2] mm, hugepages: Add hugetlb vma mremap() test
From: Mina Almasry
Cc: Mina Almasry, Mike Kravetz, Andrew Morton, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Ken Chen, Chris Kennelly, Michal Hocko,
    Vlastimil Babka, Kirill Shutemov

Signed-off-by: Mina Almasry
Cc: Mike Kravetz
Cc: Andrew Morton
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: Ken Chen
Cc: Chris Kennelly
Cc: Michal Hocko
Cc: Vlastimil Babka
Cc: Kirill Shutemov

---

Changes in v4:
- Added comments to make test output clearer.
- Modified the test case slightly to test hugepage alignment of new_addr.

---
 tools/testing/selftests/vm/.gitignore        |   1 +
 tools/testing/selftests/vm/Makefile          |   1 +
 tools/testing/selftests/vm/hugepage-mremap.c | 168 +++++++++++++++++++
 3 files changed, 170 insertions(+)
 create mode 100644 tools/testing/selftests/vm/hugepage-mremap.c

--
2.33.0.882.g93a45727a2-goog

diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index b02eac613fdda..2e7e86e852828 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0-only
 hugepage-mmap
+hugepage-mremap
 hugepage-shm
 khugepaged
 map_hugetlb
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index d9605bd10f2de..1607322a112c9 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -29,6 +29,7 @@ TEST_GEN_FILES = compaction_test
 TEST_GEN_FILES += gup_test
 TEST_GEN_FILES += hmm-tests
 TEST_GEN_FILES += hugepage-mmap
+TEST_GEN_FILES += hugepage-mremap
 TEST_GEN_FILES += hugepage-shm
 TEST_GEN_FILES += khugepaged
 TEST_GEN_FILES += madv_populate
diff --git a/tools/testing/selftests/vm/hugepage-mremap.c b/tools/testing/selftests/vm/hugepage-mremap.c
new file mode 100644
index 0000000000000..ba35b5b13c52c
--- /dev/null
+++ b/tools/testing/selftests/vm/hugepage-mremap.c
@@ -0,0 +1,168 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * hugepage-mremap:
+ *
+ * Example of remapping huge page memory in a user application using the
+ * mremap system call.
+ * Before running this application, make sure that the administrator has
+ * mounted the hugetlbfs filesystem (on some directory like /mnt) using
+ * the command "mount -t hugetlbfs nodev /mnt". In this example, the app
+ * requests memory of size 1GB that is backed by huge pages.
+ */
+
+#define _GNU_SOURCE
+#include <stdlib.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <errno.h>
+#include <string.h>
+#include <sys/mman.h>
+#include <fcntl.h> /* Definition of O_* constants */
+#include <sys/syscall.h> /* Definition of SYS_* constants */
+#include <linux/userfaultfd.h>
+#include <sys/ioctl.h>
+#include <stdbool.h>
+
+#define LENGTH (1UL * 1024 * 1024 * 1024)
+
+#define PROTECTION (PROT_READ | PROT_WRITE | PROT_EXEC)
+#define FLAGS (MAP_SHARED | MAP_ANONYMOUS)
+
+static void check_bytes(char *addr)
+{
+	printf("First hex is %x\n", *((unsigned int *)addr));
+}
+
+static void write_bytes(char *addr)
+{
+	unsigned long i;
+
+	for (i = 0; i < LENGTH; i++)
+		*(addr + i) = (char)i;
+}
+
+static int read_bytes(char *addr)
+{
+	unsigned long i;
+
+	check_bytes(addr);
+	for (i = 0; i < LENGTH; i++)
+		if (*(addr + i) != (char)i) {
+			printf("Mismatch at %lu\n", i);
+			return 1;
+		}
+	return 0;
+}
+
+static void register_region_with_uffd(char *addr, size_t len)
+{
+	long uffd; /* userfaultfd file descriptor */
+	struct uffdio_api uffdio_api;
+	struct uffdio_register uffdio_register;
+
+	/* Create and enable userfaultfd object. */
+
+	uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
+	if (uffd == -1) {
+		perror("userfaultfd");
+		exit(1);
+	}
+
+	uffdio_api.api = UFFD_API;
+	uffdio_api.features = 0;
+	if (ioctl(uffd, UFFDIO_API, &uffdio_api) == -1) {
+		perror("ioctl-UFFDIO_API");
+		exit(1);
+	}
+
+	/* Create a private anonymous mapping. The memory will be
+	 * demand-zero paged--that is, not yet allocated. When we
+	 * actually touch the memory, it will be allocated via
+	 * the userfaultfd.
+	 */
+
+	addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
+		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	if (addr == MAP_FAILED) {
+		perror("mmap");
+		exit(1);
+	}
+
+	printf("Address returned by mmap() = %p\n", addr);
+
+	/* Register the memory range of the mapping we just created for
+	 * handling by the userfaultfd object. In this mode, we request to
+	 * track missing pages (i.e., pages that have not yet been faulted
+	 * in).
+	 */
+
+	uffdio_register.range.start = (unsigned long)addr;
+	uffdio_register.range.len = len;
+	uffdio_register.mode = UFFDIO_REGISTER_MODE_MISSING;
+	if (ioctl(uffd, UFFDIO_REGISTER, &uffdio_register) == -1) {
+		perror("ioctl-UFFDIO_REGISTER");
+		exit(1);
+	}
+}
+
+int main(void)
+{
+	int ret = 0;
+
+	int fd = open("/mnt/huge/test", O_CREAT | O_RDWR, 0755);
+
+	if (fd < 0) {
+		perror("Open failed");
+		exit(1);
+	}
+
+	/* mmap to a PUD aligned address to hopefully trigger pmd sharing. */
+	unsigned long suggested_addr = 0x7eaa40000000;
+	void *haddr = mmap((void *)suggested_addr, LENGTH, PROTECTION,
+			   MAP_HUGETLB | MAP_SHARED | MAP_POPULATE, fd, 0);
+	printf("Map haddr: Returned address is %p\n", haddr);
+	if (haddr == MAP_FAILED) {
+		perror("mmap1");
+		exit(1);
+	}
+
+	/* mmap again to a dummy address to hopefully trigger pmd sharing. */
+	suggested_addr = 0x7daa40000000;
+	void *daddr = mmap((void *)suggested_addr, LENGTH, PROTECTION,
+			   MAP_HUGETLB | MAP_SHARED | MAP_POPULATE, fd, 0);
+	printf("Map daddr: Returned address is %p\n", daddr);
+	if (daddr == MAP_FAILED) {
+		perror("mmap2");
+		exit(1);
+	}
+
+	/* Note that vaddr is not hugepage aligned. mremap should hugepage
+	 * align the vaddr on remap.
+	 */
+	suggested_addr = 0x7faa4002000;
+	void *vaddr =
+		mmap((void *)suggested_addr, LENGTH, PROTECTION, FLAGS, -1, 0);
+	printf("Map vaddr: Returned address is %p\n", vaddr);
+	if (vaddr == MAP_FAILED) {
+		perror("mmap3");
+		exit(1);
+	}
+
+	register_region_with_uffd(haddr, LENGTH);
+
+	void *addr = mremap(haddr, LENGTH, LENGTH,
+			    MREMAP_MAYMOVE | MREMAP_FIXED, vaddr);
+	if (addr == MAP_FAILED) {
+		perror("mremap");
+		exit(1);
+	}
+
+	printf("Mremap: Returned address is %p\n", addr);
+	check_bytes(addr);
+	write_bytes(addr);
+	ret = read_bytes(addr);
+
+	munmap(addr, LENGTH);
+
+	return ret;
+}