From patchwork Sat Feb 18 00:27:45 2023
From: James Houghton
Date: Sat, 18 Feb 2023 00:27:45 +0000
Subject: [PATCH v2 12/46] hugetlb: add hugetlb_alloc_pmd and hugetlb_alloc_pte
Message-ID: <20230218002819.1486479-13-jthoughton@google.com>
In-Reply-To: <20230218002819.1486479-1-jthoughton@google.com>
To: Mike Kravetz, Muchun Song, Peter Xu, Andrew Morton
Cc: David Hildenbrand, David Rientjes, Axel Rasmussen, Mina Almasry,
 Zach O'Keefe, Manish Mishra, Naoya Horiguchi, Dr. David Alan Gilbert,
 Matthew Wilcox (Oracle), Vlastimil Babka, Baolin Wang, Miaohe Lin,
 Yang Shi, Frank van der Linden, Jiaqi Yan, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, James Houghton
These functions are used to allocate new PTEs below the hstate PTE. This
will be used by hugetlb_walk_step, which implements stepping forwards in
a HugeTLB high-granularity page table walk.

The reasons that we don't use the standard pmd_alloc/pte_alloc*
functions are:
 1) This prevents us from accidentally overwriting swap entries or
    attempting to use swap entries as present non-leaf PTEs (see
    pmd_alloc(); we assume that !pte_none means pte_present and
    non-leaf).
 2) Locking hugetlb PTEs can be different from locking regular PTEs.
    (Although, as implemented right now, locking is the same.)
 3) We can maintain compatibility with CONFIG_HIGHPTE. That is, HugeTLB
    HGM won't use HIGHPTE, but the kernel can still be built with it,
    and other mm code will use it.

When GENERAL_HUGETLB supports P4D-based hugepages, we will need to
implement hugetlb_pud_alloc to implement hugetlb_walk_step.

Signed-off-by: James Houghton

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index eeacadf3272b..9d839519c875 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -72,6 +72,11 @@ unsigned long hugetlb_pte_mask(const struct hugetlb_pte *hpte)
 
 bool hugetlb_pte_present_leaf(const struct hugetlb_pte *hpte, pte_t pte);
 
+pmd_t *hugetlb_alloc_pmd(struct mm_struct *mm, struct hugetlb_pte *hpte,
+			 unsigned long addr);
+pte_t *hugetlb_alloc_pte(struct mm_struct *mm, struct hugetlb_pte *hpte,
+			 unsigned long addr);
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6c74adff43b6..bb424cdf79e4 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -483,6 +483,120 @@ static bool has_same_uncharge_info(struct file_region *rg,
 #endif
 }
 
+/*
+ * hugetlb_alloc_pmd -- Allocate or find a PMD beneath a PUD-level hpte.
+ *
+ * This is meant to be used to implement hugetlb_walk_step when one must
+ * step down to a PMD. Different architectures may implement
+ * hugetlb_walk_step differently, but hugetlb_alloc_pmd and
+ * hugetlb_alloc_pte are architecture-independent.
+ *
+ * Returns:
+ *	On success: the pointer to the PMD. This should be placed into a
+ *		hugetlb_pte. @hpte is not changed.
+ *	ERR_PTR(-EINVAL): hpte is not PUD-level
+ *	ERR_PTR(-EEXIST): there is a non-leaf and non-empty PUD in @hpte
+ *	ERR_PTR(-ENOMEM): could not allocate the new PMD
+ */
+pmd_t *hugetlb_alloc_pmd(struct mm_struct *mm, struct hugetlb_pte *hpte,
+		unsigned long addr)
+{
+	spinlock_t *ptl = hugetlb_pte_lockptr(hpte);
+	pmd_t *new;
+	pud_t *pudp;
+	pud_t pud;
+
+	if (hpte->level != HUGETLB_LEVEL_PUD)
+		return ERR_PTR(-EINVAL);
+
+	pudp = (pud_t *)hpte->ptep;
+retry:
+	pud = READ_ONCE(*pudp);
+	if (likely(pud_present(pud)))
+		return unlikely(pud_leaf(pud))
+			? ERR_PTR(-EEXIST)
+			: pmd_offset(pudp, addr);
+	else if (!pud_none(pud))
+		/*
+		 * Not present and not none means that a swap entry lives here,
+		 * and we can't get rid of it.
+		 */
+		return ERR_PTR(-EEXIST);
+
+	new = pmd_alloc_one(mm, addr);
+	if (!new)
+		return ERR_PTR(-ENOMEM);
+
+	spin_lock(ptl);
+	if (!pud_same(pud, *pudp)) {
+		spin_unlock(ptl);
+		pmd_free(mm, new);
+		goto retry;
+	}
+
+	mm_inc_nr_pmds(mm);
+	smp_wmb(); /* See comment in pmd_install() */
+	pud_populate(mm, pudp, new);
+	spin_unlock(ptl);
+	return pmd_offset(pudp, addr);
+}
+
+/*
+ * hugetlb_alloc_pte -- Allocate a PTE beneath a pmd_none PMD-level hpte.
+ *
+ * See the comment above hugetlb_alloc_pmd.
+ */
+pte_t *hugetlb_alloc_pte(struct mm_struct *mm, struct hugetlb_pte *hpte,
+		unsigned long addr)
+{
+	spinlock_t *ptl = hugetlb_pte_lockptr(hpte);
+	pgtable_t new;
+	pmd_t *pmdp;
+	pmd_t pmd;
+
+	if (hpte->level != HUGETLB_LEVEL_PMD)
+		return ERR_PTR(-EINVAL);
+
+	pmdp = (pmd_t *)hpte->ptep;
+retry:
+	pmd = READ_ONCE(*pmdp);
+	if (likely(pmd_present(pmd)))
+		return unlikely(pmd_leaf(pmd))
+			? ERR_PTR(-EEXIST)
+			: pte_offset_kernel(pmdp, addr);
+	else if (!pmd_none(pmd))
+		/*
+		 * Not present and not none means that a swap entry lives here,
+		 * and we can't get rid of it.
+		 */
+		return ERR_PTR(-EEXIST);
+
+	/*
+	 * With CONFIG_HIGHPTE, calling `pte_alloc_one` directly may result
+	 * in page tables being allocated in high memory, needing a kmap to
+	 * access. Instead, we call __pte_alloc_one directly with
+	 * GFP_PGTABLE_USER to prevent these PTEs being allocated in high
+	 * memory.
+	 */
+	new = __pte_alloc_one(mm, GFP_PGTABLE_USER);
+	if (!new)
+		return ERR_PTR(-ENOMEM);
+
+	spin_lock(ptl);
+	if (!pmd_same(pmd, *pmdp)) {
+		spin_unlock(ptl);
+		pgtable_pte_page_dtor(new);
+		__free_page(new);
+		goto retry;
+	}
+
+	mm_inc_nr_ptes(mm);
+	smp_wmb(); /* See comment in pmd_install() */
+	pmd_populate(mm, pmdp, new);
+	spin_unlock(ptl);
+	return pte_offset_kernel(pmdp, addr);
+}
+
 static void coalesce_file_region(struct resv_map *resv, struct file_region *rg)
 {
 	struct file_region *nrg, *prg;