From patchwork Sat Feb 18 00:27:46 2023
X-Patchwork-Submitter: James Houghton
X-Patchwork-Id: 13145379
Date: Sat, 18 Feb 2023 00:27:46 +0000
In-Reply-To: <20230218002819.1486479-1-jthoughton@google.com>
Mime-Version: 1.0
References: <20230218002819.1486479-1-jthoughton@google.com>
X-Mailer: git-send-email 2.39.2.637.g21b0678d19-goog
Message-ID: <20230218002819.1486479-14-jthoughton@google.com>
Subject: [PATCH v2 13/46] hugetlb: add hugetlb_hgm_walk and hugetlb_walk_step
From: James Houghton
To: Mike Kravetz, Muchun Song, Peter Xu, Andrew Morton
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
    "Zach O'Keefe", Manish Mishra, Naoya Horiguchi,
David Alan Gilbert" , "Matthew Wilcox (Oracle)" , Vlastimil Babka , Baolin Wang , Miaohe Lin , Yang Shi , Frank van der Linden , Jiaqi Yan , linux-mm@kvack.org, linux-kernel@vger.kernel.org, James Houghton X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 2B37640008 X-Rspam-User: X-Stat-Signature: wsgo4a8k56gqxok4kwxgy9se7gm7jjjp X-HE-Tag: 1676680136-857607 X-HE-Meta: U2FsdGVkX1+lw+Cj/QvEVtEUgvQ9EbIvGgJ4cegixcsfVanLcK82kjp0bFRkw7pWr6Rf1XNucGhhkAL70t/SRkBsCA8hvA9ThjFzrC1lcx7hYRaTSEmAv1pyborwC1sshEcX2zJiAh+KW2k4/ETRzut196vkRhUBBdXUsugJ9yoEYeiqud3VQUg5CplCMdhjkAhe7KvqIRdFDMqLdT5kZIw5xNFtPpK5c/31874TJ2cZVWbJFFWdPuILkoWerVm1XAyr9vBDmUcSX7KRXKg/O51f3dwaeZjYis/nAPGLYHxw9WqAUQ9/hy/k8C9yaWrOzVq45OVjqXD7UnuqqQODeloTCAevKMU+neGO38WyFUDcejG5hMU8rEathse/s2ZaiWKOHPn99tJignY3l3mtaq9oyy0a8e5y4+9YN5C+Pc/RiogoRdONVogMw7XMPHYFyUa0Jqmz7fq/AuUdH9+z9aloL/ecN8ikiZ3txZHR+OsW5Kj/N9ipOUhZYF1iy1STcL8djmYPW9WX+gLMqyLl5pAThhGUCTswaLElpFavwwBGN4zcD/lYhXpdmd0T6Iz51N+IGXawVxfTPkM56XdcJ56l3liw5Z/GHZ2ifuW5fv8K73iAEs456E4maDCPdfxDi1kXIMqGC+vabJXXhCqiGn6xEeqalIzBqf6c0hY/maZf4k8PFjyhjiCH8xK29amEx9rGcNFYFLBmKjTF9uuD3AhoK7aa/NRYrezapJvnhisz6BQRFbN7EQss2I6SVO0sAX0/ERu6V0fPK2WCWRm9oQTfN/PYiRuto1Mrc3NBwSTpEJFN5BtCPo3JWLGUhVTInETSlT4IWiQbALQ4bd6Lkq6Et+Klfy794WuCrbPGHh3NhAuzwEhMeUvobCvUm2VlBkh2UPLnID0tomOggbnGPH1lMTWOtBEQ4RjZ/faPy1cVUz3r6ZXB4/Kvk3IDuBLPC0ze5SY8NVSfGykC0GD yHjbotZN BVvH+pvsgdMz3WQ8x1zvhpA7SH4d49r8JPgsgtzaFu6PlZTyJyo6WxCNZYI9QMn2olTLBp4+EHPcAacCkRh14Vi/i9pg1iaHCr1w6aPZWhQnoQ2Uf5K09zOIsv3c1+XwUOx1mWC+DS1uBvKgS1zZpCGoGGj2LixS/Ct65ovxX9OpsjerF1fotrgRkCGjWObl0ZFs+jafryHIFf+3fgAMMgCzrbu66g6+XBthm1wVGjPyWhPvHrL2trd+DEawv3cxt06wniAAE4BKksVPSO+N0DxgEiUmr2PMUF8ct9XlvuwYMhEuhIU6hUHJ/k/jML58QyCoWqMFYJPTaqrdfNxuguDbeUOh2JKSAd53UX6qbyu7YJxpN7t4iujy/hJ/We0hg6UZu4CFM5tQ6TrWl9MllavRRtLQKzXYXubr1GH/kRbC6eBToWkEeJyhRw7pxYg6bJMuhZprq7CQfVdc2FKOC4ElOdq6ETdjEnJN+VMxTJC5j+MPQ1xUzs6AO8F+rDmn+viLkSHQBP1AJO2OE12+BIy4uxkJn/jOOUeEq8gQE15CZyUAfePZfIm3/nnqcWEWW2RE6MZcGw/IbtpQpMPDwJEQhoRA86kVcPtsaQ83nQpKmVKKPmQm7X4YopO/q5FAINysiP6vdRXGNlJ+ZUmFfm1Dh3Xk1I/sehDj8mTlNYdBIPRzArA27iBxmsSS2CP8a/hmIzpAwYc/yfqQ= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: hugetlb_hgm_walk implements high-granularity page table walks for HugeTLB. It is safe to call on non-HGM enabled VMAs; it will return immediately. hugetlb_walk_step implements how we step forwards in the walk. For architectures that don't use GENERAL_HUGETLB, they will need to provide their own implementation. The broader API that should be used is hugetlb_full_walk[,alloc|,continue]. 
Signed-off-by: James Houghton

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 9d839519c875..726d581158b1 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -223,6 +223,14 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long addr, pud_t *pud);
 
+int hugetlb_full_walk(struct hugetlb_pte *hpte, struct vm_area_struct *vma,
+		      unsigned long addr);
+void hugetlb_full_walk_continue(struct hugetlb_pte *hpte,
+				struct vm_area_struct *vma, unsigned long addr);
+int hugetlb_full_walk_alloc(struct hugetlb_pte *hpte,
+			    struct vm_area_struct *vma, unsigned long addr,
+			    unsigned long target_sz);
+
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
 extern int sysctl_hugetlb_shm_group;
@@ -272,6 +280,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 pte_t *huge_pte_offset(struct mm_struct *mm,
 		       unsigned long addr, unsigned long sz);
 unsigned long hugetlb_mask_last_page(struct hstate *h);
+int hugetlb_walk_step(struct mm_struct *mm, struct hugetlb_pte *hpte,
+		      unsigned long addr, unsigned long sz);
 int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long addr, pte_t *ptep);
 void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
@@ -1054,6 +1064,8 @@ void hugetlb_register_node(struct node *node);
 void hugetlb_unregister_node(struct node *node);
 #endif
 
+enum hugetlb_level hpage_size_to_level(unsigned long sz);
+
 #else	/* CONFIG_HUGETLB_PAGE */
 
 struct hstate {};
@@ -1246,6 +1258,11 @@ static inline void hugetlb_register_node(struct node *node)
 static inline void hugetlb_unregister_node(struct node *node)
 {
 }
+
+static inline enum hugetlb_level hpage_size_to_level(unsigned long sz)
+{
+	return HUGETLB_LEVEL_PTE;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 #ifdef CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bb424cdf79e4..810c05feb41f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -97,6 +97,29 @@ static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
 static void hugetlb_unshare_pmds(struct vm_area_struct *vma,
 		unsigned long start, unsigned long end);
 
+/*
+ * hpage_size_to_level() - convert @sz to the corresponding page table level
+ *
+ * @sz must be less than or equal to a valid hugepage size.
+ */
+enum hugetlb_level hpage_size_to_level(unsigned long sz)
+{
+	/*
+	 * We order the conditionals from smallest to largest to pick the
+	 * smallest level when multiple levels have the same size (i.e.,
+	 * when levels are folded).
+	 */
+	if (sz < PMD_SIZE)
+		return HUGETLB_LEVEL_PTE;
+	if (sz < PUD_SIZE)
+		return HUGETLB_LEVEL_PMD;
+	if (sz < P4D_SIZE)
+		return HUGETLB_LEVEL_PUD;
+	if (sz < PGDIR_SIZE)
+		return HUGETLB_LEVEL_P4D;
+	return HUGETLB_LEVEL_PGD;
+}
+
 static inline bool subpool_is_free(struct hugepage_subpool *spool)
 {
 	if (spool->count)
@@ -7315,6 +7338,154 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
 }
 #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 
+/* __hugetlb_hgm_walk - walks a high-granularity HugeTLB page table to resolve
+ * the page table entry for @addr. We might allocate new PTEs.
+ *
+ * @hpte must always be pointing at an hstate-level PTE or deeper.
+ *
+ * This function will never walk further if it encounters a PTE of a size
+ * less than or equal to @sz.
+ *
+ * @alloc determines what we do when we encounter an empty PTE. If false,
+ * we stop walking. If true and @sz is less than the current PTE's size,
+ * we make that PTE point to the next level down, going until @sz is the same
+ * as our current PTE.
+ *
+ * If @alloc is false and @sz is PAGE_SIZE, this function will always
+ * succeed, but that does not guarantee that hugetlb_pte_size(hpte) is @sz.
+ *
+ * Return:
+ *	-ENOMEM if we couldn't allocate new PTEs.
+ *	-EEXIST if the caller wanted to walk further than a migration PTE,
+ *		poison PTE, or a PTE marker. The caller needs to manually deal
+ *		with this scenario.
+ *	-EINVAL if called with invalid arguments (@sz invalid, @hpte not
+ *		initialized).
+ *	0 otherwise.
+ *
+ * Even if this function fails, @hpte is guaranteed to always remain
+ * valid.
+ */
+static int __hugetlb_hgm_walk(struct mm_struct *mm, struct vm_area_struct *vma,
+			      struct hugetlb_pte *hpte, unsigned long addr,
+			      unsigned long sz, bool alloc)
+{
+	int ret = 0;
+	pte_t pte;
+
+	if (WARN_ON_ONCE(sz < PAGE_SIZE))
+		return -EINVAL;
+
+	if (WARN_ON_ONCE(!hpte->ptep))
+		return -EINVAL;
+
+	while (hugetlb_pte_size(hpte) > sz && !ret) {
+		pte = huge_ptep_get(hpte->ptep);
+		if (!pte_present(pte)) {
+			if (!alloc)
+				return 0;
+			if (unlikely(!huge_pte_none(pte)))
+				return -EEXIST;
+		} else if (hugetlb_pte_present_leaf(hpte, pte))
+			return 0;
+		ret = hugetlb_walk_step(mm, hpte, addr, sz);
+	}
+
+	return ret;
+}
+
+/*
+ * hugetlb_hgm_walk - Has the same behavior as __hugetlb_hgm_walk but will
+ * initialize @hpte with hstate-level PTE pointer @ptep.
+ */
+static int hugetlb_hgm_walk(struct hugetlb_pte *hpte,
+			    pte_t *ptep,
+			    struct vm_area_struct *vma,
+			    unsigned long addr,
+			    unsigned long target_sz,
+			    bool alloc)
+{
+	struct hstate *h = hstate_vma(vma);
+
+	hugetlb_pte_init(vma->vm_mm, hpte, ptep, huge_page_shift(h),
+			 hpage_size_to_level(huge_page_size(h)));
+	return __hugetlb_hgm_walk(vma->vm_mm, vma, hpte, addr, target_sz,
+				  alloc);
+}
+
+/*
+ * hugetlb_full_walk_continue - continue a high-granularity page-table walk.
+ *
+ * If a user has a valid @hpte but knows that @hpte is not a leaf, they can
+ * attempt to continue walking by calling this function.
+ *
+ * This function will never fail, but @hpte might not change.
+ *
+ * If @hpte hasn't been initialized, then this function's behavior is
+ * undefined.
+ */
+void hugetlb_full_walk_continue(struct hugetlb_pte *hpte,
+				struct vm_area_struct *vma,
+				unsigned long addr)
+{
+	/* __hugetlb_hgm_walk will never fail with these arguments. */
+	WARN_ON_ONCE(__hugetlb_hgm_walk(vma->vm_mm, vma, hpte, addr,
+					PAGE_SIZE, false));
+}
+
+/*
+ * hugetlb_full_walk - do a high-granularity page-table walk; never allocate.
+ *
+ * This function can only fail if we find that the hstate-level PTE is not
+ * allocated. Callers can take advantage of this fact to skip address regions
+ * that cannot be mapped in that case.
+ *
+ * If this function succeeds, @hpte is guaranteed to be valid.
+ */
+int hugetlb_full_walk(struct hugetlb_pte *hpte,
+		      struct vm_area_struct *vma,
+		      unsigned long addr)
+{
+	struct hstate *h = hstate_vma(vma);
+	unsigned long sz = huge_page_size(h);
+	/*
+	 * We must mask the address appropriately so that we pick up the first
+	 * PTE in a contiguous group.
+	 */
+	pte_t *ptep = hugetlb_walk(vma, addr & huge_page_mask(h), sz);
+
+	if (!ptep)
+		return -ENOMEM;
+
+	/* hugetlb_hgm_walk will never fail with these arguments. */
+	WARN_ON_ONCE(hugetlb_hgm_walk(hpte, ptep, vma, addr, PAGE_SIZE, false));
+	return 0;
+}
+
+/*
+ * hugetlb_full_walk_alloc - do a high-granularity walk, potentially allocate
+ * new PTEs.
+ */
+int hugetlb_full_walk_alloc(struct hugetlb_pte *hpte,
+			    struct vm_area_struct *vma,
+			    unsigned long addr,
+			    unsigned long target_sz)
+{
+	struct hstate *h = hstate_vma(vma);
+	unsigned long sz = huge_page_size(h);
+	/*
+	 * We must mask the address appropriately so that we pick up the first
+	 * PTE in a contiguous group.
+	 */
+	pte_t *ptep = huge_pte_alloc(vma->vm_mm, vma, addr & huge_page_mask(h),
+				     sz);
+
+	if (!ptep)
+		return -ENOMEM;
+
+	return hugetlb_hgm_walk(hpte, ptep, vma, addr, target_sz, true);
+}
+
 #ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
 pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long addr, unsigned long sz)
@@ -7382,6 +7553,48 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 	return (pte_t *)pmd;
 }
 
+/*
+ * hugetlb_walk_step() - Walk the page table one step to resolve the page
+ * (hugepage or subpage) entry at address @addr.
+ *
+ * @sz always points at the final target PTE size (e.g. PAGE_SIZE for the
+ * lowest level PTE).
+ *
+ * @hpte will always remain valid, even if this function fails.
+ *
+ * Architectures that implement this function must ensure that if @hpte does
+ * not change levels, then its PTL must also stay the same.
+ */
+int hugetlb_walk_step(struct mm_struct *mm, struct hugetlb_pte *hpte,
+		      unsigned long addr, unsigned long sz)
+{
+	pte_t *ptep;
+	spinlock_t *ptl;
+
+	switch (hpte->level) {
+	case HUGETLB_LEVEL_PUD:
+		ptep = (pte_t *)hugetlb_alloc_pmd(mm, hpte, addr);
+		if (IS_ERR(ptep))
+			return PTR_ERR(ptep);
+		hugetlb_pte_init(mm, hpte, ptep, PMD_SHIFT,
+				 HUGETLB_LEVEL_PMD);
+		break;
+	case HUGETLB_LEVEL_PMD:
+		ptep = hugetlb_alloc_pte(mm, hpte, addr);
+		if (IS_ERR(ptep))
+			return PTR_ERR(ptep);
+		ptl = pte_lockptr(mm, (pmd_t *)hpte->ptep);
+		__hugetlb_pte_init(hpte, ptep, PAGE_SHIFT,
+				   HUGETLB_LEVEL_PTE, ptl);
+		break;
+	default:
+		WARN_ONCE(1, "%s: got invalid level: %d (shift: %d)\n",
+			  __func__, hpte->level, hpte->shift);
+		return -EINVAL;
+	}
+	return 0;
+}
+
 /*
  * Return a mask that can be used to update an address to the last huge
  * page in a page table page mapping size. Used to skip non-present
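[Editor's note] For reference, here is a minimal, illustrative sketch (not
part of the patch) of how a caller might use hugetlb_full_walk() as
documented above: scanning a range of a HugeTLB VMA at whatever granularity
is currently mapped, and skipping hugepages whose hstate-level PTE is not
allocated. example_scan() is a hypothetical helper, and the locking a real
walker needs (VMA lock, i_mmap_rwsem, PTL) is deliberately elided:

/* Hypothetical illustration only; not part of this patch. */
static void example_scan(struct vm_area_struct *vma, unsigned long start,
			 unsigned long end)
{
	struct hstate *h = hstate_vma(vma);
	struct hugetlb_pte hpte;
	unsigned long addr = start;

	while (addr < end) {
		unsigned long pte_size;

		/*
		 * hugetlb_full_walk() only fails if the hstate-level PTE is
		 * not allocated, so the whole hugepage can be skipped.
		 */
		if (hugetlb_full_walk(&hpte, vma, addr)) {
			addr = (addr & huge_page_mask(h)) + huge_page_size(h);
			continue;
		}

		/*
		 * hpte now describes the entry covering @addr; its size may
		 * be anywhere from PAGE_SIZE up to huge_page_size(h).
		 */
		pte_size = hugetlb_pte_size(&hpte);
		addr = ALIGN_DOWN(addr, pte_size) + pte_size;
	}
}

A caller that needs a mapping at a specific granularity would instead use
hugetlb_full_walk_alloc() with the desired target_sz and then check
hugetlb_pte_size(), since the walk can legitimately stop at a present leaf
larger than the target.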