From patchwork Tue Jul 25 04:20:34 2023
X-Patchwork-Submitter: Vishal Moola
X-Patchwork-Id: 13325728
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
 linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev,
 linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
 linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Hugh Dickins, "Vishal Moola (Oracle)",
 David Hildenbrand, Claudio Imbrenda, Mike Rapoport
Subject: [PATCH mm-unstable v7 14/31] s390: Convert various pgalloc functions
 to use ptdescs
Date: Mon, 24 Jul 2023 21:20:34 -0700
Message-Id: <20230725042051.36691-15-vishal.moola@gmail.com>
X-Mailer: git-send-email 2.40.1
In-Reply-To: <20230725042051.36691-1-vishal.moola@gmail.com>
References: <20230725042051.36691-1-vishal.moola@gmail.com>

As part of the conversions to replace pgtable constructor/destructors with
ptdesc equivalents, convert various page table functions to use ptdescs.

Some of the functions use the *get*page*() helper functions. Convert these to
use pagetable_alloc() and ptdesc_address() instead to help standardize page
tables further.
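The core of the change is the switch from struct page based helpers to the
ptdesc API. As a rough sketch of the before/after allocation pattern
(mirroring the PTE-allocation hunks below, not a new helper added by this
patch):

	/* Before: 4K page table backed by struct page */
	struct page *page = alloc_page(GFP_KERNEL);
	if (!page)
		return NULL;
	if (!pgtable_pte_page_ctor(page)) {
		__free_page(page);
		return NULL;
	}
	table = (unsigned long *) page_to_virt(page);

	/* After: the same allocation expressed through struct ptdesc */
	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, 0);
	if (!ptdesc)
		return NULL;
	if (!pagetable_pte_ctor(ptdesc)) {
		pagetable_free(ptdesc);
		return NULL;
	}
	table = (unsigned long *) ptdesc_to_virt(ptdesc);
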
Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 arch/s390/include/asm/pgalloc.h |   4 +-
 arch/s390/include/asm/tlb.h     |   4 +-
 arch/s390/mm/pgalloc.c          | 128 ++++++++++++++++----------------
 3 files changed, 69 insertions(+), 67 deletions(-)

diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h
index 89a9d5ef94f8..376b4b23bdaa 100644
--- a/arch/s390/include/asm/pgalloc.h
+++ b/arch/s390/include/asm/pgalloc.h
@@ -86,7 +86,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr)
 	if (!table)
 		return NULL;
 	crst_table_init(table, _SEGMENT_ENTRY_EMPTY);
-	if (!pgtable_pmd_page_ctor(virt_to_page(table))) {
+	if (!pagetable_pmd_ctor(virt_to_ptdesc(table))) {
 		crst_table_free(mm, table);
 		return NULL;
 	}
@@ -97,7 +97,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
 	if (mm_pmd_folded(mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	pagetable_pmd_dtor(virt_to_ptdesc(pmd));
 	crst_table_free(mm, (unsigned long *) pmd);
 }
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index b91f4a9b044c..383b1f91442c 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -89,12 +89,12 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
 {
 	if (mm_pmd_folded(tlb->mm))
 		return;
-	pgtable_pmd_page_dtor(virt_to_page(pmd));
+	pagetable_pmd_dtor(virt_to_ptdesc(pmd));
 	__tlb_adjust_range(tlb, address, PAGE_SIZE);
 	tlb->mm->context.flush_mm = 1;
 	tlb->freed_tables = 1;
 	tlb->cleared_puds = 1;
-	tlb_remove_table(tlb, pmd);
+	tlb_remove_ptdesc(tlb, pmd);
 }
 
 /*
diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
index d7374add7820..07fc660a24aa 100644
--- a/arch/s390/mm/pgalloc.c
+++ b/arch/s390/mm/pgalloc.c
@@ -43,17 +43,17 @@ __initcall(page_table_register_sysctl);
 
 unsigned long *crst_table_alloc(struct mm_struct *mm)
 {
-	struct page *page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
+	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, CRST_ALLOC_ORDER);
 
-	if (!page)
+	if (!ptdesc)
 		return NULL;
-	arch_set_page_dat(page, CRST_ALLOC_ORDER);
-	return (unsigned long *) page_to_virt(page);
+	arch_set_page_dat(ptdesc_page(ptdesc), CRST_ALLOC_ORDER);
+	return (unsigned long *) ptdesc_to_virt(ptdesc);
 }
 
 void crst_table_free(struct mm_struct *mm, unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	pagetable_free(virt_to_ptdesc(table));
 }
 
 static void __crst_table_upgrade(void *arg)
@@ -140,21 +140,21 @@ static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits)
 
 struct page *page_table_alloc_pgste(struct mm_struct *mm)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	u64 *table;
 
-	page = alloc_page(GFP_KERNEL);
-	if (page) {
-		table = (u64 *)page_to_virt(page);
+	ptdesc = pagetable_alloc(GFP_KERNEL, 0);
+	if (ptdesc) {
+		table = (u64 *)ptdesc_to_virt(ptdesc);
 		memset64(table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	}
-	return page;
+	return ptdesc_page(ptdesc);
 }
 
 void page_table_free_pgste(struct page *page)
 {
-	__free_page(page);
+	pagetable_free(page_ptdesc(page));
 }
 
 #endif /* CONFIG_PGSTE */
@@ -242,7 +242,7 @@ void page_table_free_pgste(struct page *page)
 unsigned long *page_table_alloc(struct mm_struct *mm)
 {
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned int mask, bit;
 
 	/* Try to get a fragment of a 4K page as a 2K page table */
@@ -250,9 +250,9 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 		table = NULL;
 		spin_lock_bh(&mm->context.lock);
 		if (!list_empty(&mm->context.pgtable_list)) {
-			page = list_first_entry(&mm->context.pgtable_list,
-						struct page, lru);
-			mask = atomic_read(&page->_refcount) >> 24;
+			ptdesc = list_first_entry(&mm->context.pgtable_list,
+						struct ptdesc, pt_list);
+			mask = atomic_read(&ptdesc->_refcount) >> 24;
 			/*
 			 * The pending removal bits must also be checked.
 			 * Failure to do so might lead to an impossible
@@ -264,13 +264,13 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			 */
 			mask = (mask | (mask >> 4)) & 0x03U;
 			if (mask != 0x03U) {
-				table = (unsigned long *) page_to_virt(page);
+				table = (unsigned long *) ptdesc_to_virt(ptdesc);
 				bit = mask & 1;		/* =1 -> second 2K */
 				if (bit)
 					table += PTRS_PER_PTE;
-				atomic_xor_bits(&page->_refcount,
+				atomic_xor_bits(&ptdesc->_refcount,
 						0x01U << (bit + 24));
-				list_del_init(&page->lru);
+				list_del_init(&ptdesc->pt_list);
 			}
 		}
 		spin_unlock_bh(&mm->context.lock);
@@ -278,28 +278,28 @@ unsigned long *page_table_alloc(struct mm_struct *mm)
 			return table;
 	}
 	/* Allocate a fresh page */
-	page = alloc_page(GFP_KERNEL);
-	if (!page)
+	ptdesc = pagetable_alloc(GFP_KERNEL, 0);
+	if (!ptdesc)
 		return NULL;
-	if (!pgtable_pte_page_ctor(page)) {
-		__free_page(page);
+	if (!pagetable_pte_ctor(ptdesc)) {
+		pagetable_free(ptdesc);
 		return NULL;
 	}
-	arch_set_page_dat(page, 0);
+	arch_set_page_dat(ptdesc_page(ptdesc), 0);
 	/* Initialize page table */
-	table = (unsigned long *) page_to_virt(page);
+	table = (unsigned long *) ptdesc_to_virt(ptdesc);
 	if (mm_alloc_pgste(mm)) {
 		/* Return 4K page table with PGSTEs */
-		INIT_LIST_HEAD(&page->lru);
-		atomic_xor_bits(&page->_refcount, 0x03U << 24);
+		INIT_LIST_HEAD(&ptdesc->pt_list);
+		atomic_xor_bits(&ptdesc->_refcount, 0x03U << 24);
 		memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE);
 		memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE);
 	} else {
 		/* Return the first 2K fragment of the page */
-		atomic_xor_bits(&page->_refcount, 0x01U << 24);
+		atomic_xor_bits(&ptdesc->_refcount, 0x01U << 24);
 		memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE);
 		spin_lock_bh(&mm->context.lock);
-		list_add(&page->lru, &mm->context.pgtable_list);
+		list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		spin_unlock_bh(&mm->context.lock);
 	}
 	return table;
@@ -322,19 +322,18 @@ static void page_table_release_check(struct page *page, void *table,
 
 static void pte_free_now(struct rcu_head *head)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 
-	page = container_of(head, struct page, rcu_head);
-	pgtable_pte_page_dtor(page);
-	__free_page(page);
+	ptdesc = container_of(head, struct ptdesc, pt_rcu_head);
+	pagetable_pte_dtor(ptdesc);
+	pagetable_free(ptdesc);
 }
 
 void page_table_free(struct mm_struct *mm, unsigned long *table)
 {
 	unsigned int mask, bit, half;
-	struct page *page;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
-	page = virt_to_page(table);
 	if (!mm_alloc_pgste(mm)) {
 		/* Free 2K page table fragment of a 4K page */
 		bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t));
@@ -344,51 +343,50 @@ void page_table_free(struct mm_struct *mm, unsigned long *table)
 		 * will happen outside of the critical section from this
 		 * function or from __tlb_remove_table()
 		 */
-		mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
+		mask = atomic_xor_bits(&ptdesc->_refcount, 0x11U << (bit + 24));
 		mask >>= 24;
-		if ((mask & 0x03U) && !PageActive(page)) {
+		if ((mask & 0x03U) && !folio_test_active(ptdesc_folio(ptdesc))) {
 			/*
 			 * Other half is allocated, and neither half has had
 			 * its free deferred: add page to head of list, to make
 			 * this freed half available for immediate reuse.
 			 */
-			list_add(&page->lru, &mm->context.pgtable_list);
+			list_add(&ptdesc->pt_list, &mm->context.pgtable_list);
 		} else {
 			/* If page is on list, now remove it. */
-			list_del_init(&page->lru);
+			list_del_init(&ptdesc->pt_list);
 		}
 		spin_unlock_bh(&mm->context.lock);
-		mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24));
+		mask = atomic_xor_bits(&ptdesc->_refcount, 0x10U << (bit + 24));
 		mask >>= 24;
 		if (mask != 0x00U)
			return;
 		half = 0x01U << bit;
 	} else {
 		half = 0x03U;
-		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
+		mask = atomic_xor_bits(&ptdesc->_refcount, 0x03U << 24);
 		mask >>= 24;
 	}
-	page_table_release_check(page, table, half, mask);
-	if (TestClearPageActive(page))
-		call_rcu(&page->rcu_head, pte_free_now);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	if (folio_test_clear_active(ptdesc_folio(ptdesc)))
+		call_rcu(&ptdesc->pt_rcu_head, pte_free_now);
 	else
-		pte_free_now(&page->rcu_head);
+		pte_free_now(&ptdesc->pt_rcu_head);
 }
 
 void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 			 unsigned long vmaddr)
 {
 	struct mm_struct *mm;
-	struct page *page;
 	unsigned int bit, mask;
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	mm = tlb->mm;
-	page = virt_to_page(table);
 	if (mm_alloc_pgste(mm)) {
 		gmap_unlink(mm, table, vmaddr);
 		table = (unsigned long *) ((unsigned long)table | 0x03U);
-		tlb_remove_table(tlb, table);
+		tlb_remove_ptdesc(tlb, table);
 		return;
 	}
 	bit = ((unsigned long) table & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t));
@@ -398,19 +396,19 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table,
 	 * outside of the critical section from __tlb_remove_table() or from
 	 * page_table_free()
 	 */
-	mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24));
+	mask = atomic_xor_bits(&ptdesc->_refcount, 0x11U << (bit + 24));
 	mask >>= 24;
-	if ((mask & 0x03U) && !PageActive(page)) {
+	if ((mask & 0x03U) && !folio_test_active(ptdesc_folio(ptdesc))) {
 		/*
 		 * Other half is allocated, and neither half has had
 		 * its free deferred: add page to end of list, to make
 		 * this freed half available for reuse once its pending
 		 * bit has been cleared by __tlb_remove_table().
 		 */
-		list_add_tail(&page->lru, &mm->context.pgtable_list);
+		list_add_tail(&ptdesc->pt_list, &mm->context.pgtable_list);
 	} else {
 		/* If page is on list, now remove it. */
-		list_del_init(&page->lru);
+		list_del_init(&ptdesc->pt_list);
 	}
 	spin_unlock_bh(&mm->context.lock);
 	table = (unsigned long *) ((unsigned long) table | (0x01U << bit));
@@ -421,30 +419,30 @@ void __tlb_remove_table(void *_table)
 {
 	unsigned int mask = (unsigned long) _table & 0x03U, half = mask;
 	void *table = (void *)((unsigned long) _table ^ mask);
-	struct page *page = virt_to_page(table);
+	struct ptdesc *ptdesc = virt_to_ptdesc(table);
 
 	switch (half) {
 	case 0x00U:	/* pmd, pud, or p4d */
-		free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+		pagetable_free(ptdesc);
 		return;
 	case 0x01U:	/* lower 2K of a 4K page table */
 	case 0x02U:	/* higher 2K of a 4K page table */
-		mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24));
+		mask = atomic_xor_bits(&ptdesc->_refcount, mask << (4 + 24));
 		mask >>= 24;
 		if (mask != 0x00U)
 			return;
 		break;
 	case 0x03U:	/* 4K page table with pgstes */
-		mask = atomic_xor_bits(&page->_refcount, 0x03U << 24);
+		mask = atomic_xor_bits(&ptdesc->_refcount, 0x03U << 24);
 		mask >>= 24;
 		break;
 	}
 
-	page_table_release_check(page, table, half, mask);
-	if (TestClearPageActive(page))
-		call_rcu(&page->rcu_head, pte_free_now);
+	page_table_release_check(ptdesc_page(ptdesc), table, half, mask);
+	if (folio_test_clear_active(ptdesc_folio(ptdesc)))
+		call_rcu(&ptdesc->pt_rcu_head, pte_free_now);
 	else
-		pte_free_now(&page->rcu_head);
+		pte_free_now(&ptdesc->pt_rcu_head);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -488,16 +486,20 @@ static void base_pgt_free(unsigned long *table)
 static unsigned long *base_crst_alloc(unsigned long val)
 {
 	unsigned long *table;
+	struct ptdesc *ptdesc;
 
-	table = (unsigned long *)__get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER);
-	if (table)
-		crst_table_init(table, val);
+	ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, CRST_ALLOC_ORDER);
+	if (!ptdesc)
+		return NULL;
+	table = ptdesc_address(ptdesc);
+
+	crst_table_init(table, val);
 	return table;
 }
 
 static void base_crst_free(unsigned long *table)
 {
-	free_pages((unsigned long)table, CRST_ALLOC_ORDER);
+	pagetable_free(virt_to_ptdesc(table));
 }
 
 #define BASE_ADDR_END_FUNC(NAME, SIZE)					\