From patchwork Thu Jan  5 10:18:11 2023
X-Patchwork-Submitter: James Houghton <jthoughton@google.com>
X-Patchwork-Id: 13089648
Date: Thu, 5 Jan 2023 10:18:11 +0000
In-Reply-To: <20230105101844.1893104-1-jthoughton@google.com>
References: <20230105101844.1893104-1-jthoughton@google.com>
Message-ID: <20230105101844.1893104-14-jthoughton@google.com>
Subject: [PATCH 13/46] hugetlb: add hugetlb_hgm_walk and hugetlb_walk_step
From: James Houghton <jthoughton@google.com>
To: Mike Kravetz, Muchun Song, Peter Xu
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
 "Zach O'Keefe", Manish Mishra, Naoya Horiguchi,
 "Dr. David Alan Gilbert", "Matthew Wilcox (Oracle)", Vlastimil Babka,
 Baolin Wang, Miaohe Lin, Yang Shi, Andrew Morton,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 James Houghton <jthoughton@google.com>

hugetlb_hgm_walk implements high-granularity page table walks for
HugeTLB. It is safe to call on non-HGM-enabled VMAs; it will return
immediately.

hugetlb_walk_step implements how we step forward in the walk.
Architectures that don't use GENERAL_HUGETLB will need to provide
their own implementation.
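To illustrate how these pieces are meant to fit together, here is a
minimal, hypothetical caller sketch (the function name and the -EAGAIN
policy are made up for illustration; it assumes the caller already
holds the locks that hugetlb_walk() requires):

	static int map_range_at_4k(struct vm_area_struct *vma,
				   unsigned long addr)
	{
		struct hugetlb_pte hpte;
		int ret;

		/*
		 * Walk to @addr, allocating page-table levels as
		 * needed to get down to PAGE_SIZE granularity.
		 */
		ret = hugetlb_full_walk_alloc(&hpte, vma, addr, PAGE_SIZE);
		if (ret)
			return ret; /* -ENOMEM, or -EEXIST past a marker */

		/*
		 * The walk stops early at present leaves, so check the
		 * size we actually got before assuming a
		 * PAGE_SIZE-level PTE.
		 */
		if (hugetlb_pte_size(&hpte) != PAGE_SIZE)
			return -EAGAIN;

		/* ... install the PAGE_SIZE PTE under hpte.ptl ... */
		return 0;
	}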
Signed-off-by: James Houghton <jthoughton@google.com>
---
 include/linux/hugetlb.h |  35 +++++--
 mm/hugetlb.c            | 213 ++++++++++++++++++++++++++++++++++++++++
 2 files changed, 242 insertions(+), 6 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ad9d19f0d1b9..2fcd8f313628 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -239,6 +239,14 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
 
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, pud_t *pud);
+int hugetlb_full_walk(struct hugetlb_pte *hpte, struct vm_area_struct *vma,
+		      unsigned long addr);
+void hugetlb_full_walk_continue(struct hugetlb_pte *hpte,
+				struct vm_area_struct *vma, unsigned long addr);
+int hugetlb_full_walk_alloc(struct hugetlb_pte *hpte,
+			    struct vm_area_struct *vma, unsigned long addr,
+			    unsigned long target_sz);
+
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
 extern int sysctl_hugetlb_shm_group;
@@ -288,6 +296,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 pte_t *huge_pte_offset(struct mm_struct *mm,
 		       unsigned long addr, unsigned long sz);
 unsigned long hugetlb_mask_last_page(struct hstate *h);
+int hugetlb_walk_step(struct mm_struct *mm, struct hugetlb_pte *hpte,
+		      unsigned long addr, unsigned long sz);
 int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
 		     unsigned long addr, pte_t *ptep);
 void adjust_range_if_pmd_sharing_possible(struct vm_area_struct *vma,
@@ -1067,6 +1077,8 @@ void hugetlb_register_node(struct node *node);
 void hugetlb_unregister_node(struct node *node);
 #endif
 
+enum hugetlb_level hpage_size_to_level(unsigned long sz);
+
 #else /* CONFIG_HUGETLB_PAGE */
 
 struct hstate {};
@@ -1259,6 +1271,11 @@ static inline void hugetlb_register_node(struct node *node)
 static inline void hugetlb_unregister_node(struct node *node)
 {
 }
+
+static inline enum hugetlb_level hpage_size_to_level(unsigned long sz)
+{
+	return HUGETLB_LEVEL_PTE;
+}
 #endif /* CONFIG_HUGETLB_PAGE */
 
 #ifdef CONFIG_HUGETLB_HIGH_GRANULARITY_MAPPING
@@ -1333,12 +1350,8 @@ __vma_has_hugetlb_vma_lock(struct vm_area_struct *vma)
 	return (vma->vm_flags & VM_MAYSHARE) && vma->vm_private_data;
 }
 
-/*
- * Safe version of huge_pte_offset() to check the locks. See comments
- * above huge_pte_offset().
- */
-static inline pte_t *
-hugetlb_walk(struct vm_area_struct *vma, unsigned long addr, unsigned long sz)
+static inline void
+hugetlb_walk_lock_check(struct vm_area_struct *vma)
 {
 #if defined(CONFIG_HUGETLB_PAGE) && \
 	defined(CONFIG_ARCH_WANT_HUGE_PMD_SHARE) && defined(CONFIG_LOCKDEP)
@@ -1360,6 +1373,16 @@ hugetlb_walk(struct vm_area_struct *vma, unsigned long addr, unsigned long sz)
 		     !lockdep_is_held(
 				&vma->vm_file->f_mapping->i_mmap_rwsem));
 #endif
+}
+
+/*
+ * Safe version of huge_pte_offset() to check the locks. See comments
+ * above huge_pte_offset().
+ */
+static inline pte_t *
+hugetlb_walk(struct vm_area_struct *vma, unsigned long addr, unsigned long sz)
+{
+	hugetlb_walk_lock_check(vma);
 	return huge_pte_offset(vma->vm_mm, addr, sz);
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2160cbaf3311..aa8e59cbca69 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -94,6 +94,29 @@ static int hugetlb_acct_memory(struct hstate *h, long delta);
 static void hugetlb_vma_lock_free(struct vm_area_struct *vma);
 static void __hugetlb_vma_unlock_write_free(struct vm_area_struct *vma);
 
+/*
+ * hpage_size_to_level() - convert @sz to the corresponding page table level
+ *
+ * @sz must be less than or equal to a valid hugepage size.
+ */
+enum hugetlb_level hpage_size_to_level(unsigned long sz)
+{
+	/*
+	 * We order the conditionals from smallest to largest to pick the
+	 * smallest level when multiple levels have the same size (i.e.,
+	 * when levels are folded).
+	 */
+	if (sz < PMD_SIZE)
+		return HUGETLB_LEVEL_PTE;
+	if (sz < PUD_SIZE)
+		return HUGETLB_LEVEL_PMD;
+	if (sz < P4D_SIZE)
+		return HUGETLB_LEVEL_PUD;
+	if (sz < PGDIR_SIZE)
+		return HUGETLB_LEVEL_P4D;
+	return HUGETLB_LEVEL_PGD;
+}
+
 static inline bool subpool_is_free(struct hugepage_subpool *spool)
 {
 	if (spool->count)
@@ -7276,6 +7299,153 @@ bool want_pmd_share(struct vm_area_struct *vma, unsigned long addr)
 }
 #endif /* CONFIG_ARCH_WANT_HUGE_PMD_SHARE */
 
+/* hugetlb_hgm_walk - walks a high-granularity HugeTLB page table to resolve
+ * the page table entry for @addr. We might allocate new PTEs.
+ *
+ * @hpte must always be pointing at an hstate-level PTE or deeper.
+ *
+ * This function will never walk further if it encounters a PTE of a size
+ * less than or equal to @sz.
+ *
+ * @alloc determines what we do when we encounter an empty PTE. If false,
+ * we stop walking. If true and @sz is less than the current PTE's size,
+ * we make that PTE point to the next level down, going until @sz is the same
+ * as our current PTE.
+ *
+ * If @alloc is false and @sz is PAGE_SIZE, this function will always
+ * succeed, but that does not guarantee that hugetlb_pte_size(hpte) is @sz.
+ *
+ * Return:
+ *	-ENOMEM if we couldn't allocate new PTEs.
+ *	-EEXIST if the caller wanted to walk further than a migration PTE,
+ *		poison PTE, or a PTE marker. The caller needs to manually deal
+ *		with this scenario.
+ *	-EINVAL if called with invalid arguments (@sz invalid, @hpte not
+ *		initialized).
+ *	0 otherwise.
+ *
+ * Even if this function fails, @hpte is guaranteed to always remain
+ * valid.
+ */
+static int hugetlb_hgm_walk(struct mm_struct *mm, struct vm_area_struct *vma,
+			    struct hugetlb_pte *hpte, unsigned long addr,
+			    unsigned long sz, bool alloc)
+{
+	int ret = 0;
+	pte_t pte;
+
+	if (WARN_ON_ONCE(sz < PAGE_SIZE))
+		return -EINVAL;
+
+	if (WARN_ON_ONCE(!hpte->ptep))
+		return -EINVAL;
+
+	/* We have the same synchronization requirements as hugetlb_walk. */
+	hugetlb_walk_lock_check(vma);
+
+	while (hugetlb_pte_size(hpte) > sz && !ret) {
+		pte = huge_ptep_get(hpte->ptep);
+		if (!pte_present(pte)) {
+			if (!alloc)
+				return 0;
+			if (unlikely(!huge_pte_none(pte)))
+				return -EEXIST;
+		} else if (hugetlb_pte_present_leaf(hpte, pte))
+			return 0;
+		ret = hugetlb_walk_step(mm, hpte, addr, sz);
+	}
+
+	return ret;
+}
+
+static int hugetlb_hgm_walk_uninit(struct hugetlb_pte *hpte,
+				   pte_t *ptep,
+				   struct vm_area_struct *vma,
+				   unsigned long addr,
+				   unsigned long target_sz,
+				   bool alloc)
+{
+	struct hstate *h = hstate_vma(vma);
+
+	hugetlb_pte_populate(vma->vm_mm, hpte, ptep, huge_page_shift(h),
+			     hpage_size_to_level(huge_page_size(h)));
+	return hugetlb_hgm_walk(vma->vm_mm, vma, hpte, addr, target_sz,
+				alloc);
+}
+
+/*
+ * hugetlb_full_walk_continue - continue a high-granularity page-table walk.
+ *
+ * If a user has a valid @hpte but knows that @hpte is not a leaf, they can
+ * attempt to continue walking by calling this function.
+ *
+ * This function may never fail, but @hpte might not change.
+ *
+ * If @hpte is not valid, then this function is a no-op.
+ */
+void hugetlb_full_walk_continue(struct hugetlb_pte *hpte,
+				struct vm_area_struct *vma,
+				unsigned long addr)
+{
+	/* hugetlb_hgm_walk will never fail with these arguments. */
+	WARN_ON_ONCE(hugetlb_hgm_walk(vma->vm_mm, vma, hpte, addr,
+				      PAGE_SIZE, false));
+}
+
+/*
+ * hugetlb_full_walk - do a high-granularity page-table walk; never allocate.
+ *
+ * This function can only fail if we find that the hstate-level PTE is not
+ * allocated. Callers can take advantage of this fact to skip address regions
+ * that cannot be mapped in that case.
+ *
+ * If this function succeeds, @hpte is guaranteed to be valid.
+ */
+int hugetlb_full_walk(struct hugetlb_pte *hpte,
+		      struct vm_area_struct *vma,
+		      unsigned long addr)
+{
+	struct hstate *h = hstate_vma(vma);
+	unsigned long sz = huge_page_size(h);
+	/*
+	 * We must mask the address appropriately so that we pick up the first
+	 * PTE in a contiguous group.
+	 */
+	pte_t *ptep = hugetlb_walk(vma, addr & huge_page_mask(h), sz);
+
+	if (!ptep)
+		return -ENOMEM;
+
+	/* hugetlb_hgm_walk_uninit will never fail with these arguments. */
+	WARN_ON_ONCE(hugetlb_hgm_walk_uninit(hpte, ptep, vma, addr,
+					     PAGE_SIZE, false));
+	return 0;
+}
+
+/*
+ * hugetlb_full_walk_alloc - do a high-granularity walk, potentially allocate
+ *	new PTEs.
+ */
+int hugetlb_full_walk_alloc(struct hugetlb_pte *hpte,
+			    struct vm_area_struct *vma,
+			    unsigned long addr,
+			    unsigned long target_sz)
+{
+	struct hstate *h = hstate_vma(vma);
+	unsigned long sz = huge_page_size(h);
+	/*
+	 * We must mask the address appropriately so that we pick up the first
+	 * PTE in a contiguous group.
+	 */
+	pte_t *ptep = huge_pte_alloc(vma->vm_mm, vma, addr & huge_page_mask(h),
+				     sz);
+
+	if (!ptep)
+		return -ENOMEM;
+
+	return hugetlb_hgm_walk_uninit(hpte, ptep, vma, addr, target_sz, true);
+}
+
 #ifdef CONFIG_ARCH_WANT_GENERAL_HUGETLB
 pte_t *huge_pte_alloc(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long addr, unsigned long sz)
@@ -7343,6 +7513,49 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 	return (pte_t *)pmd;
 }
 
+/*
+ * hugetlb_walk_step() - Walk the page table one step to resolve the page
+ * (hugepage or subpage) entry at address @addr.
+ *
+ * @sz always points at the final target PTE size (e.g. PAGE_SIZE for the
+ * lowest level PTE).
+ *
+ * @hpte will always remain valid, even if this function fails.
+ *
+ * Architectures that implement this function must ensure that if @hpte does
+ * not change levels, then its PTL must also stay the same.
+ */
+int hugetlb_walk_step(struct mm_struct *mm, struct hugetlb_pte *hpte,
+		      unsigned long addr, unsigned long sz)
+{
+	pte_t *ptep;
+	spinlock_t *ptl;
+
+	switch (hpte->level) {
+	case HUGETLB_LEVEL_PUD:
+		ptep = (pte_t *)hugetlb_alloc_pmd(mm, hpte, addr);
+		if (IS_ERR(ptep))
+			return PTR_ERR(ptep);
+		hugetlb_pte_populate(mm, hpte, ptep, PMD_SHIFT,
+				     HUGETLB_LEVEL_PMD);
+		break;
+	case HUGETLB_LEVEL_PMD:
+		ptep = hugetlb_alloc_pte(mm, hpte, addr);
+		if (IS_ERR(ptep))
+			return PTR_ERR(ptep);
+		ptl = pte_lockptr(mm, (pmd_t *)hpte->ptep);
+		__hugetlb_pte_populate(hpte, ptep, PAGE_SHIFT,
+				       HUGETLB_LEVEL_PTE, ptl);
+		hpte->ptl = ptl;
+		break;
+	default:
+		WARN_ONCE(1, "%s: got invalid level: %d (shift: %d)\n",
+			  __func__, hpte->level, hpte->shift);
+		return -EINVAL;
+	}
+	return 0;
+}
+
 /*
  * Return a mask that can be used to update an address to the last huge
  * page in a page table page mapping size. Used to skip non-present