From patchwork Thu Dec 19 16:22:49 2024
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Claudio Imbrenda
Cc: "Matthew Wilcox (Oracle)", David Hildenbrand, linux-mm@kvack.org,
    linux-s390@vger.kernel.org
Subject: [PATCH 1/2] s390: Convert gmap code to use ptdesc
Date: Thu, 19 Dec 2024 16:22:49 +0000
Message-ID: <20241219162252.1025317-2-willy@infradead.org>
In-Reply-To: <20241219162252.1025317-1-willy@infradead.org>
References: <20241219162252.1025317-1-willy@infradead.org>

There was originally some doubt about whether these page tables should
be represented by a vanilla struct page or whether they should be a
ptdesc.
As we continue on our quest to shrink struct page, we seem to have
crossed the line into believing that these page tables should be a
ptdesc.  At least for now.

Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/s390/mm/gmap.c | 181 ++++++++++++++++++++++----------------------
 1 file changed, 89 insertions(+), 92 deletions(-)

diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
index 16b8a36c56de..2ca100aae1f7 100644
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -26,15 +26,15 @@
 
 #define GMAP_SHADOW_FAKE_TABLE	1ULL
 
-static struct page *gmap_alloc_crst(void)
+static struct ptdesc *gmap_alloc_crst(void)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 
-	page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
-	if (!page)
+	ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
+	if (!ptdesc)
 		return NULL;
-	__arch_set_page_dat(page_to_virt(page), 1UL << CRST_ALLOC_ORDER);
-	return page;
+	__arch_set_page_dat(ptdesc_to_virt(ptdesc), 1UL << CRST_ALLOC_ORDER);
+	return ptdesc;
 }
 
 /**
@@ -46,7 +46,7 @@ static struct page *gmap_alloc_crst(void)
 static struct gmap *gmap_alloc(unsigned long limit)
 {
 	struct gmap *gmap;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long *table;
 	unsigned long etype, atype;
 
@@ -79,12 +79,12 @@ static struct gmap *gmap_alloc(unsigned long limit)
 	spin_lock_init(&gmap->guest_table_lock);
 	spin_lock_init(&gmap->shadow_lock);
 	refcount_set(&gmap->ref_count, 1);
-	page = gmap_alloc_crst();
-	if (!page)
+	ptdesc = gmap_alloc_crst();
+	if (!ptdesc)
 		goto out_free;
-	page->index = 0;
-	list_add(&page->lru, &gmap->crst_list);
-	table = page_to_virt(page);
+	ptdesc->pt_index = 0;
+	list_add(&ptdesc->pt_list, &gmap->crst_list);
+	table = ptdesc_to_virt(ptdesc);
 	crst_table_init(table, etype);
 	gmap->table = table;
 	gmap->asce = atype | _ASCE_TABLE_LENGTH |
@@ -193,23 +193,21 @@ static void gmap_rmap_radix_tree_free(struct radix_tree_root *root)
  */
 static void gmap_free(struct gmap *gmap)
 {
-	struct page *page, *next;
+	struct ptdesc *ptdesc, *next;
 
 	/* Flush tlb of all gmaps (if not already done for shadows) */
 	if (!(gmap_is_shadow(gmap) && gmap->removed))
 		gmap_flush_tlb(gmap);
 	/* Free all segment & region tables. */
-	list_for_each_entry_safe(page, next, &gmap->crst_list, lru)
-		__free_pages(page, CRST_ALLOC_ORDER);
+	list_for_each_entry_safe(ptdesc, next, &gmap->crst_list, pt_list)
+		pagetable_free(ptdesc);
 	gmap_radix_tree_free(&gmap->guest_to_host);
 	gmap_radix_tree_free(&gmap->host_to_guest);
 
 	/* Free additional data for a shadow gmap */
 	if (gmap_is_shadow(gmap)) {
-		struct ptdesc *ptdesc, *n;
-
 		/* Free all page tables. */
-		list_for_each_entry_safe(ptdesc, n, &gmap->pt_list, pt_list)
+		list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list)
 			page_table_free_pgste(ptdesc);
 		gmap_rmap_radix_tree_free(&gmap->host_to_rmap);
 		/* Release reference to the parent */
@@ -287,26 +285,26 @@ EXPORT_SYMBOL_GPL(gmap_remove);
 static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
 			    unsigned long init, unsigned long gaddr)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long *new;
 
 	/* since we dont free the gmap table until gmap_free we can unlock */
-	page = gmap_alloc_crst();
-	if (!page)
+	ptdesc = gmap_alloc_crst();
+	if (!ptdesc)
 		return -ENOMEM;
-	new = page_to_virt(page);
+	new = ptdesc_to_virt(ptdesc);
 	crst_table_init(new, init);
 	spin_lock(&gmap->guest_table_lock);
 	if (*table & _REGION_ENTRY_INVALID) {
-		list_add(&page->lru, &gmap->crst_list);
+		list_add(&ptdesc->pt_list, &gmap->crst_list);
 		*table = __pa(new) | _REGION_ENTRY_LENGTH |
 			(*table & _REGION_ENTRY_TYPE_MASK);
-		page->index = gaddr;
-		page = NULL;
+		ptdesc->pt_index = gaddr;
+		ptdesc = NULL;
 	}
 	spin_unlock(&gmap->guest_table_lock);
-	if (page)
-		__free_pages(page, CRST_ALLOC_ORDER);
+	if (ptdesc)
+		pagetable_free(ptdesc);
 	return 0;
 }
 
@@ -318,13 +316,13 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table,
  */
 static unsigned long __gmap_segment_gaddr(unsigned long *entry)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	unsigned long offset;
 
 	offset = (unsigned long) entry / sizeof(unsigned long);
 	offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE;
-	page = pmd_pgtable_page((pmd_t *) entry);
-	return page->index + offset;
+	ptdesc = pmd_ptdesc((pmd_t *) entry);
+	return ptdesc->pt_index + offset;
 }
 
 /**
@@ -1458,7 +1456,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 {
 	unsigned long r3o, *r3e;
 	phys_addr_t sgt;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r3e = gmap_table_walk(sg, raddr, 2); /* get region-3 pointer */
@@ -1471,9 +1469,9 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 	*r3e = _REGION3_ENTRY_EMPTY;
 	__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 	/* Free segment table */
-	page = phys_to_page(sgt);
-	list_del(&page->lru);
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc = page_ptdesc(phys_to_page(sgt));
+	list_del(&ptdesc->pt_list);
+	pagetable_free(ptdesc);
 }
 
 /**
@@ -1487,7 +1485,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr)
 static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 				unsigned long *r3t)
 {
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t sgt;
 	int i;
 
@@ -1499,9 +1497,9 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr,
 		r3t[i] = _REGION3_ENTRY_EMPTY;
 		__gmap_unshadow_sgt(sg, raddr, __va(sgt));
 		/* Free segment table */
-		page = phys_to_page(sgt);
-		list_del(&page->lru);
-		__free_pages(page, CRST_ALLOC_ORDER);
+		ptdesc = page_ptdesc(phys_to_page(sgt));
+		list_del(&ptdesc->pt_list);
+		pagetable_free(ptdesc);
 	}
 }
 
@@ -1516,7 +1514,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 {
 	unsigned long r2o, *r2e;
 	phys_addr_t r3t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	r2e = gmap_table_walk(sg, raddr, 3); /* get region-2 pointer */
@@ -1529,9 +1527,9 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr)
 	*r2e = _REGION2_ENTRY_EMPTY;
 	__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 	/* Free region 3 table */
-	page = phys_to_page(r3t);
-	list_del(&page->lru);
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc = page_ptdesc(phys_to_page(r3t));
+	list_del(&ptdesc->pt_list);
+	pagetable_free(ptdesc);
 }
 
 /**
@@ -1546,7 +1544,7 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 				unsigned long *r2t)
 {
 	phys_addr_t r3t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int i;
 
 	BUG_ON(!gmap_is_shadow(sg));
@@ -1557,9 +1555,9 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 		r2t[i] = _REGION2_ENTRY_EMPTY;
 		__gmap_unshadow_r3t(sg, raddr, __va(r3t));
 		/* Free region 3 table */
-		page = phys_to_page(r3t);
-		list_del(&page->lru);
-		__free_pages(page, CRST_ALLOC_ORDER);
+		ptdesc = page_ptdesc(phys_to_page(r3t));
+		list_del(&ptdesc->pt_list);
+		pagetable_free(ptdesc);
 	}
 }
 
@@ -1573,7 +1571,7 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr,
 static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 {
 	unsigned long r1o, *r1e;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t r2t;
 
 	BUG_ON(!gmap_is_shadow(sg));
@@ -1587,9 +1585,9 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr)
 	*r1e = _REGION1_ENTRY_EMPTY;
 	__gmap_unshadow_r2t(sg, raddr, __va(r2t));
 	/* Free region 2 table */
-	page = phys_to_page(r2t);
-	list_del(&page->lru);
-	__free_pages(page, CRST_ALLOC_ORDER);
+	ptdesc = page_ptdesc(phys_to_page(r2t));
+	list_del(&ptdesc->pt_list);
+	pagetable_free(ptdesc);
 }
 
 /**
@@ -1604,7 +1602,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 				unsigned long *r1t)
 {
 	unsigned long asce;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	phys_addr_t r2t;
 	int i;
 
@@ -1619,9 +1617,9 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr,
 		gmap_idte_one(asce, raddr);
 		r1t[i] = _REGION1_ENTRY_EMPTY;
 		/* Free region 2 table */
-		page = phys_to_page(r2t);
-		list_del(&page->lru);
-		__free_pages(page, CRST_ALLOC_ORDER);
+		ptdesc = page_ptdesc(phys_to_page(r2t));
+		list_del(&ptdesc->pt_list);
+		pagetable_free(ptdesc);
 	}
 }
 
@@ -1819,18 +1817,18 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_r2t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	/* Allocate a shadow region second table */
-	page = gmap_alloc_crst();
-	if (!page)
+	ptdesc = gmap_alloc_crst();
+	if (!ptdesc)
 		return -ENOMEM;
-	page->index = r2t & _REGION_ENTRY_ORIGIN;
+	ptdesc->pt_index = r2t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
-	s_r2t = page_to_phys(page);
+		ptdesc->pt_index |= GMAP_SHADOW_FAKE_TABLE;
+	s_r2t = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 4); /* get region-1 pointer */
@@ -1851,7 +1849,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 		 _REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r2t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1879,7 +1877,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	__free_pages(page, CRST_ALLOC_ORDER);
+	pagetable_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_r2t);
@@ -1903,18 +1901,18 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_r3t;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg));
 	/* Allocate a shadow region second table */
-	page = gmap_alloc_crst();
-	if (!page)
+	ptdesc = gmap_alloc_crst();
+	if (!ptdesc)
 		return -ENOMEM;
-	page->index = r3t & _REGION_ENTRY_ORIGIN;
+	ptdesc->pt_index = r3t & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
-	s_r3t = page_to_phys(page);
+		ptdesc->pt_index |= GMAP_SHADOW_FAKE_TABLE;
+	s_r3t = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 3); /* get region-2 pointer */
@@ -1935,7 +1933,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 		 _REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= (r3t & _REGION_ENTRY_PROTECT);
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -1963,7 +1961,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	__free_pages(page, CRST_ALLOC_ORDER);
+	pagetable_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_r3t);
@@ -1987,18 +1985,18 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	unsigned long raddr, origin, offset, len;
 	unsigned long *table;
 	phys_addr_t s_sgt;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg) || (sgt & _REGION3_ENTRY_LARGE));
 	/* Allocate a shadow segment table */
-	page = gmap_alloc_crst();
-	if (!page)
+	ptdesc = gmap_alloc_crst();
+	if (!ptdesc)
 		return -ENOMEM;
-	page->index = sgt & _REGION_ENTRY_ORIGIN;
+	ptdesc->pt_index = sgt & _REGION_ENTRY_ORIGIN;
 	if (fake)
-		page->index |= GMAP_SHADOW_FAKE_TABLE;
-	s_sgt = page_to_phys(page);
+		ptdesc->pt_index |= GMAP_SHADOW_FAKE_TABLE;
+	s_sgt = page_to_phys(ptdesc_page(ptdesc));
 	/* Install shadow region second table */
 	spin_lock(&sg->guest_table_lock);
 	table = gmap_table_walk(sg, saddr, 2); /* get region-3 pointer */
@@ -2019,7 +2017,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 		 _REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_INVALID;
 	if (sg->edat_level >= 1)
 		*table |= sgt & _REGION_ENTRY_PROTECT;
-	list_add(&page->lru, &sg->crst_list);
+	list_add(&ptdesc->pt_list, &sg->crst_list);
 	if (fake) {
 		/* nothing to protect for fake tables */
 		*table &= ~_REGION_ENTRY_INVALID;
@@ -2047,7 +2045,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt,
 	return rc;
 out_free:
 	spin_unlock(&sg->guest_table_lock);
-	__free_pages(page, CRST_ALLOC_ORDER);
+	pagetable_free(ptdesc);
 	return rc;
 }
 EXPORT_SYMBOL_GPL(gmap_shadow_sgt);
@@ -2070,7 +2068,7 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 			   int *fake)
 {
 	unsigned long *table;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	int rc;
 
 	BUG_ON(!gmap_is_shadow(sg));
@@ -2078,10 +2076,10 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr,
 	table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */
 	if (table && !(*table & _SEGMENT_ENTRY_INVALID)) {
 		/* Shadow page tables are full pages (pte+pgste) */
-		page = pfn_to_page(*table >> PAGE_SHIFT);
-		*pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE;
+		ptdesc = page_ptdesc(pfn_to_page(*table >> PAGE_SHIFT));
+		*pgt = ptdesc->pt_index & ~GMAP_SHADOW_FAKE_TABLE;
 		*dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT);
-		*fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE);
+		*fake = !!(ptdesc->pt_index & GMAP_SHADOW_FAKE_TABLE);
 		rc = 0;
 	} else {
 		rc = -EAGAIN;
@@ -2961,11 +2959,10 @@ EXPORT_SYMBOL_GPL(__s390_uv_destroy_range);
  */
 void s390_unlist_old_asce(struct gmap *gmap)
 {
-	struct page *old;
+	struct ptdesc *old;
 
-	old = virt_to_page(gmap->table);
+	old = virt_to_ptdesc(gmap->table);
 	spin_lock(&gmap->guest_table_lock);
-	list_del(&old->lru);
 	/*
 	 * Sometimes the topmost page might need to be "removed" multiple
 	 * times, for example if the VM is rebooted into secure mode several
@@ -2980,7 +2977,7 @@ void s390_unlist_old_asce(struct gmap *gmap)
 	 * pointers, so list_del can work (and do nothing) without
 	 * dereferencing stale or invalid pointers.
	 */
-	INIT_LIST_HEAD(&old->lru);
+	list_del_init(&old->pt_list);
 	spin_unlock(&gmap->guest_table_lock);
 }
 EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
@@ -3001,7 +2998,7 @@ EXPORT_SYMBOL_GPL(s390_unlist_old_asce);
 int s390_replace_asce(struct gmap *gmap)
 {
 	unsigned long asce;
-	struct page *page;
+	struct ptdesc *ptdesc;
 	void *table;
 
 	s390_unlist_old_asce(gmap);
@@ -3010,11 +3007,11 @@ int s390_replace_asce(struct gmap *gmap)
 	if ((gmap->asce & _ASCE_TYPE_MASK) == _ASCE_TYPE_SEGMENT)
 		return -EINVAL;
 
-	page = gmap_alloc_crst();
-	if (!page)
+	ptdesc = gmap_alloc_crst();
+	if (!ptdesc)
 		return -ENOMEM;
-	page->index = 0;
-	table = page_to_virt(page);
+	ptdesc->pt_index = 0;
+	table = ptdesc_to_virt(ptdesc);
 	memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT));
 
 	/*
@@ -3023,7 +3020,7 @@ int s390_replace_asce(struct gmap *gmap)
 	 * it will be freed when the VM is torn down.
	 */
 	spin_lock(&gmap->guest_table_lock);
-	list_add(&page->lru, &gmap->crst_list);
+	list_add(&ptdesc->pt_list, &gmap->crst_list);
 	spin_unlock(&gmap->guest_table_lock);
 
 	/* Set new table origin while preserving existing ASCE control bits */