From patchwork Mon Apr 17 20:50:16 2023
X-Patchwork-Submitter: Vishal Moola
X-Patchwork-Id: 13214621
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 01/33] s390: Use _pt_s390_gaddr for gmap address tracking Date: Mon, 17 Apr 2023 13:50:16 -0700 Message-Id: <20230417205048.15870-2-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org s390 uses page->index to keep track of page tables for the guest address space. In an attempt to consolidate the usage of page fields in s390, replace _pt_pad_2 with _pt_s390_gaddr to replace page->index in gmap. This will help with the splitting of struct ptdesc from struct page, as well as allow s390 to use _pt_frag_refcount for fragmented page table tracking. Since page->_pt_s390_gaddr aliases with mapping, ensure its set to NULL before freeing the pages as well. Signed-off-by: Vishal Moola (Oracle) --- arch/s390/mm/gmap.c | 50 +++++++++++++++++++++++++++------------- include/linux/mm_types.h | 2 +- 2 files changed, 35 insertions(+), 17 deletions(-) diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index 5a716bdcba05..a61ea1a491dc 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -70,7 +70,7 @@ static struct gmap *gmap_alloc(unsigned long limit) page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); if (!page) goto out_free; - page->index = 0; + page->_pt_s390_gaddr = 0; list_add(&page->lru, &gmap->crst_list); table = page_to_virt(page); crst_table_init(table, etype); @@ -187,16 +187,20 @@ static void gmap_free(struct gmap *gmap) if (!(gmap_is_shadow(gmap) && gmap->removed)) gmap_flush_tlb(gmap); /* Free all segment & region tables. */ - list_for_each_entry_safe(page, next, &gmap->crst_list, lru) + list_for_each_entry_safe(page, next, &gmap->crst_list, lru) { + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); + } gmap_radix_tree_free(&gmap->guest_to_host); gmap_radix_tree_free(&gmap->host_to_guest); /* Free additional data for a shadow gmap */ if (gmap_is_shadow(gmap)) { /* Free all page tables. 
*/ - list_for_each_entry_safe(page, next, &gmap->pt_list, lru) + list_for_each_entry_safe(page, next, &gmap->pt_list, lru) { + page->_pt_s390_gaddr = 0; page_table_free_pgste(page); + } gmap_rmap_radix_tree_free(&gmap->host_to_rmap); /* Release reference to the parent */ gmap_put(gmap->parent); @@ -318,12 +322,14 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table, list_add(&page->lru, &gmap->crst_list); *table = __pa(new) | _REGION_ENTRY_LENGTH | (*table & _REGION_ENTRY_TYPE_MASK); - page->index = gaddr; + page->_pt_s390_gaddr = gaddr; page = NULL; } spin_unlock(&gmap->guest_table_lock); - if (page) + if (page) { + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); + } return 0; } @@ -341,7 +347,7 @@ static unsigned long __gmap_segment_gaddr(unsigned long *entry) offset = (unsigned long) entry / sizeof(unsigned long); offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE; page = pmd_pgtable_page((pmd_t *) entry); - return page->index + offset; + return page->_pt_s390_gaddr + offset; } /** @@ -1351,6 +1357,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr) /* Free page table */ page = phys_to_page(pgt); list_del(&page->lru); + page->_pt_s390_gaddr = 0; page_table_free_pgste(page); } @@ -1379,6 +1386,7 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr, /* Free page table */ page = phys_to_page(pgt); list_del(&page->lru); + page->_pt_s390_gaddr = 0; page_table_free_pgste(page); } } @@ -1409,6 +1417,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr) /* Free segment table */ page = phys_to_page(sgt); list_del(&page->lru); + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); } @@ -1437,6 +1446,7 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr, /* Free segment table */ page = phys_to_page(sgt); list_del(&page->lru); + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); } } @@ -1467,6 +1477,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr) /* Free region 3 table */ page = phys_to_page(r3t); list_del(&page->lru); + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); } @@ -1495,6 +1506,7 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr, /* Free region 3 table */ page = phys_to_page(r3t); list_del(&page->lru); + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); } } @@ -1525,6 +1537,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr) /* Free region 2 table */ page = phys_to_page(r2t); list_del(&page->lru); + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); } @@ -1557,6 +1570,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr, /* Free region 2 table */ page = phys_to_page(r2t); list_del(&page->lru); + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); } } @@ -1762,9 +1776,9 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t, page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); if (!page) return -ENOMEM; - page->index = r2t & _REGION_ENTRY_ORIGIN; + page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN; if (fake) - page->index |= GMAP_SHADOW_FAKE_TABLE; + page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; s_r2t = page_to_phys(page); /* Install shadow region second table */ spin_lock(&sg->guest_table_lock); @@ -1814,6 +1828,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t, return rc; out_free: spin_unlock(&sg->guest_table_lock); + page->_pt_s390_gaddr = 0; 
__free_pages(page, CRST_ALLOC_ORDER); return rc; } @@ -1846,9 +1861,9 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t, page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); if (!page) return -ENOMEM; - page->index = r3t & _REGION_ENTRY_ORIGIN; + page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN; if (fake) - page->index |= GMAP_SHADOW_FAKE_TABLE; + page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; s_r3t = page_to_phys(page); /* Install shadow region second table */ spin_lock(&sg->guest_table_lock); @@ -1898,6 +1913,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t, return rc; out_free: spin_unlock(&sg->guest_table_lock); + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); return rc; } @@ -1930,9 +1946,9 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt, page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); if (!page) return -ENOMEM; - page->index = sgt & _REGION_ENTRY_ORIGIN; + page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN; if (fake) - page->index |= GMAP_SHADOW_FAKE_TABLE; + page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; s_sgt = page_to_phys(page); /* Install shadow region second table */ spin_lock(&sg->guest_table_lock); @@ -1982,6 +1998,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt, return rc; out_free: spin_unlock(&sg->guest_table_lock); + page->_pt_s390_gaddr = 0; __free_pages(page, CRST_ALLOC_ORDER); return rc; } @@ -2014,9 +2031,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr, if (table && !(*table & _SEGMENT_ENTRY_INVALID)) { /* Shadow page tables are full pages (pte+pgste) */ page = pfn_to_page(*table >> PAGE_SHIFT); - *pgt = page->index & ~GMAP_SHADOW_FAKE_TABLE; + *pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE; *dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT); - *fake = !!(page->index & GMAP_SHADOW_FAKE_TABLE); + *fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE); rc = 0; } else { rc = -EAGAIN; @@ -2054,9 +2071,9 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt, page = page_table_alloc_pgste(sg->mm); if (!page) return -ENOMEM; - page->index = pgt & _SEGMENT_ENTRY_ORIGIN; + page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN; if (fake) - page->index |= GMAP_SHADOW_FAKE_TABLE; + page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; s_pgt = page_to_phys(page); /* Install shadow page table */ spin_lock(&sg->guest_table_lock); @@ -2101,6 +2118,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt, return rc; out_free: spin_unlock(&sg->guest_table_lock); + page->_pt_s390_gaddr = 0; page_table_free_pgste(page); return rc; diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 3fc9e680f174..2616d64c0e8c 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -144,7 +144,7 @@ struct page { struct { /* Page table pages */ unsigned long _pt_pad_1; /* compound_head */ pgtable_t pmd_huge_pte; /* protected by page->ptl */ - unsigned long _pt_pad_2; /* mapping */ + unsigned long _pt_s390_gaddr; /* mapping */ union { struct mm_struct *pt_mm; /* x86 pgds only */ atomic_t pt_frag_refcount; /* powerpc */ From patchwork Mon Apr 17 20:50:17 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214622 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from 
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH 02/33] s390: Use pt_frag_refcount for pagetables
Date: Mon, 17 Apr 2023 13:50:17 -0700
Message-Id:
<20230417205048.15870-3-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org s390 currently uses _refcount to identify fragmented page tables. The page table struct already has a member pt_frag_refcount used by powerpc, so have s390 use that instead of the _refcount field as well. This improves the safety for _refcount and the page table tracking. This also allows us to simplify the tracking since we can once again use the lower byte of pt_frag_refcount instead of the upper byte of _refcount. Signed-off-by: Vishal Moola (Oracle) --- arch/s390/mm/pgalloc.c | 38 +++++++++++++++----------------------- 1 file changed, 15 insertions(+), 23 deletions(-) diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index 66ab68db9842..6b99932abc66 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -182,20 +182,17 @@ void page_table_free_pgste(struct page *page) * As follows from the above, no unallocated or fully allocated parent * pages are contained in mm_context_t::pgtable_list. * - * The upper byte (bits 24-31) of the parent page _refcount is used + * The lower byte (bits 0-7) of the parent page pt_frag_refcount is used * for tracking contained 2KB-pgtables and has the following format: * * PP AA - * 01234567 upper byte (bits 24-31) of struct page::_refcount + * 01234567 upper byte (bits 0-7) of struct page::pt_frag_refcount * || || * || |+--- upper 2KB-pgtable is allocated * || +---- lower 2KB-pgtable is allocated * |+------- upper 2KB-pgtable is pending for removal * +-------- lower 2KB-pgtable is pending for removal * - * (See commit 620b4e903179 ("s390: use _refcount for pgtables") on why - * using _refcount is possible). - * * When 2KB-pgtable is allocated the corresponding AA bit is set to 1. * The parent page is either: * - added to mm_context_t::pgtable_list in case the second half of the @@ -243,11 +240,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm) if (!list_empty(&mm->context.pgtable_list)) { page = list_first_entry(&mm->context.pgtable_list, struct page, lru); - mask = atomic_read(&page->_refcount) >> 24; + mask = atomic_read(&page->pt_frag_refcount); /* * The pending removal bits must also be checked. * Failure to do so might lead to an impossible - * value of (i.e 0x13 or 0x23) written to _refcount. + * value of (i.e 0x13 or 0x23) written to + * pt_frag_refcount. * Such values violate the assumption that pending and * allocation bits are mutually exclusive, and the rest * of the code unrails as result. 
That could lead to @@ -259,8 +257,8 @@ unsigned long *page_table_alloc(struct mm_struct *mm) bit = mask & 1; /* =1 -> second 2K */ if (bit) table += PTRS_PER_PTE; - atomic_xor_bits(&page->_refcount, - 0x01U << (bit + 24)); + atomic_xor_bits(&page->pt_frag_refcount, + 0x01U << bit); list_del(&page->lru); } } @@ -281,12 +279,12 @@ unsigned long *page_table_alloc(struct mm_struct *mm) table = (unsigned long *) page_to_virt(page); if (mm_alloc_pgste(mm)) { /* Return 4K page table with PGSTEs */ - atomic_xor_bits(&page->_refcount, 0x03U << 24); + atomic_xor_bits(&page->pt_frag_refcount, 0x03U); memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE); memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE); } else { /* Return the first 2K fragment of the page */ - atomic_xor_bits(&page->_refcount, 0x01U << 24); + atomic_xor_bits(&page->pt_frag_refcount, 0x01U); memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE); spin_lock_bh(&mm->context.lock); list_add(&page->lru, &mm->context.pgtable_list); @@ -323,22 +321,19 @@ void page_table_free(struct mm_struct *mm, unsigned long *table) * will happen outside of the critical section from this * function or from __tlb_remove_table() */ - mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); - mask >>= 24; + mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit); if (mask & 0x03U) list_add(&page->lru, &mm->context.pgtable_list); else list_del(&page->lru); spin_unlock_bh(&mm->context.lock); - mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24)); - mask >>= 24; + mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit); if (mask != 0x00U) return; half = 0x01U << bit; } else { half = 0x03U; - mask = atomic_xor_bits(&page->_refcount, 0x03U << 24); - mask >>= 24; + mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U); } page_table_release_check(page, table, half, mask); @@ -368,8 +363,7 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table, * outside of the critical section from __tlb_remove_table() or from * page_table_free() */ - mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); - mask >>= 24; + mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit); if (mask & 0x03U) list_add_tail(&page->lru, &mm->context.pgtable_list); else @@ -391,14 +385,12 @@ void __tlb_remove_table(void *_table) return; case 0x01U: /* lower 2K of a 4K page table */ case 0x02U: /* higher 2K of a 4K page table */ - mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24)); - mask >>= 24; + mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4); if (mask != 0x00U) return; break; case 0x03U: /* 4K page table with pgstes */ - mask = atomic_xor_bits(&page->_refcount, 0x03U << 24); - mask >>= 24; + mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U); break; } From patchwork Mon Apr 17 20:50:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214623 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0FA7DC7EE43 for ; Mon, 17 Apr 2023 20:53:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230399AbjDQUxE (ORCPT ); Mon, 17 Apr 2023 16:53:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39336 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id 
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH 03/33] pgtable: Create struct ptdesc
Date: Mon, 17 Apr 2023 13:50:18 -0700
Message-Id: <20230417205048.15870-4-vishal.moola@gmail.com>
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>

Currently, page table information is stored within struct page. As part of simplifying struct page, create struct ptdesc for page table information.
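The descriptor only works because every ptdesc field lands at the same offset as the struct page field it aliases; the TABLE_MATCH() static_asserts added in the diff below encode that contract so any layout drift fails the build. As a rough standalone illustration of the same pattern (simplified stand-in structs and field names, not the kernel definitions):

/*
 * Rough standalone sketch of the overlay contract (stand-in structs,
 * not the kernel definitions).
 */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

struct fake_page {                     /* stand-in for struct page */
	unsigned long flags;
	unsigned long compound_head;
	unsigned long mapping;
};

struct fake_ptdesc {                   /* stand-in for struct ptdesc */
	unsigned long __page_flags;    /* aliases flags */
	unsigned long _pt_pad_1;       /* aliases compound_head */
	unsigned long _pt_s390_gaddr;  /* aliases mapping */
};

/* Same idea as TABLE_MATCH() in the patch: layout drift fails the build. */
#define TABLE_MATCH(pg, pt)						\
	static_assert(offsetof(struct fake_page, pg) ==		\
		      offsetof(struct fake_ptdesc, pt),			\
		      #pg " and " #pt " must overlap")

TABLE_MATCH(flags, __page_flags);
TABLE_MATCH(compound_head, _pt_pad_1);
TABLE_MATCH(mapping, _pt_s390_gaddr);
static_assert(sizeof(struct fake_ptdesc) <= sizeof(struct fake_page),
	      "descriptor must fit in the page struct");

int main(void)
{
	struct fake_page page = { .mapping = 0x1000 };

	/* The overlay lets the same memory be read through either view. */
	struct fake_ptdesc *pt = (struct fake_ptdesc *)&page;

	printf("gaddr through the ptdesc view: %#lx\n", pt->_pt_s390_gaddr);
	return 0;
}

Reordering or resizing one of the stand-in fields makes the corresponding static_assert fire at compile time, which is exactly the safety net the real TABLE_MATCH() checks provide while ptdesc still overlays struct page.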
Signed-off-by: Vishal Moola (Oracle) --- include/linux/pgtable.h | 50 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 50 insertions(+) diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 023918666dd4..7cc6ea057ee9 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -47,6 +47,56 @@ #define pmd_pgtable(pmd) pmd_page(pmd) #endif +/** + * struct ptdesc - Memory descriptor for page tables. + * @__page_flags: Same as page flags. Unused for page tables. + * @pt_list: List of used page tables. Used for s390 and x86. + * @_pt_pad_1: Padding that aliases with page's compound head. + * @pmd_huge_pte: Protected by ptdesc->ptl, used for THPs. + * @_pt_s390_gaddr: Aliases with page's mapping. Used for s390 gmap only. + * @pt_mm: Used for x86 pgds. + * @pt_frag_refcount: For fragmented page table tracking. Powerpc and s390 only. + * @ptl: Lock for the page table. + * + * This struct overlays struct page for now. Do not modify without a good + * understanding of the issues. + */ +struct ptdesc { + unsigned long __page_flags; + + union { + struct list_head pt_list; + struct { + unsigned long _pt_pad_1; + pgtable_t pmd_huge_pte; + }; + }; + unsigned long _pt_s390_gaddr; + + union { + struct mm_struct *pt_mm; + atomic_t pt_frag_refcount; + }; + +#if ALLOC_SPLIT_PTLOCKS + spinlock_t *ptl; +#else + spinlock_t ptl; +#endif +}; + +#define TABLE_MATCH(pg, pt) \ + static_assert(offsetof(struct page, pg) == offsetof(struct ptdesc, pt)) +TABLE_MATCH(flags, __page_flags); +TABLE_MATCH(compound_head, pt_list); +TABLE_MATCH(compound_head, _pt_pad_1); +TABLE_MATCH(mapping, _pt_s390_gaddr); +TABLE_MATCH(pmd_huge_pte, pmd_huge_pte); +TABLE_MATCH(pt_mm, pt_mm); +TABLE_MATCH(ptl, ptl); +#undef TABLE_MATCH +static_assert(sizeof(struct ptdesc) <= sizeof(struct page)); + /* * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD] * From patchwork Mon Apr 17 20:50:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214624 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 202E0C77B7F for ; Mon, 17 Apr 2023 20:53:29 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231185AbjDQUx2 (ORCPT ); Mon, 17 Apr 2023 16:53:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39436 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229710AbjDQUw7 (ORCPT ); Mon, 17 Apr 2023 16:52:59 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6EAD649C3; Mon, 17 Apr 2023 13:52:51 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id i8so18357571plt.10; Mon, 17 Apr 2023 13:52:51 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764771; x=1684356771; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=73w88W+Bv7iZwJaXp6X3OPD7H4X7IV5/5eDbDnIxbNE=; b=W9nPVyoXWKBUEnGfTyhkHARBTsZqdpJrkrNTKo667lyZEzjN6PxmFbbY0YylZMq9yl YfrdFmghyeO5HwXhYCrYBTje7eKqSW91jSFSDlT0dbMWvyCEN9ttpAXe7AFMT1oxAH0K BCnetbVzBpycS1Tax/m+o/Gfxhef/Kkj1D6sF8rcDiRlOa7VvGL8i+xK5Gl2/zoIf4yK 
M/9RMi7Hd+G3wH7cuJrpgQnusczVewZS29/AobewpqmBg09r8g/9YIIka/gTIbj6DHBc MFg7CjS5kCFyT6XG4/q3MJQ78bnC2QIXJhA82Tu9j9zbgdzbLpx3IE+L7G092ISLAdCE r0mA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764771; x=1684356771; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=73w88W+Bv7iZwJaXp6X3OPD7H4X7IV5/5eDbDnIxbNE=; b=Te6MV/AiBHKaEJ6l1TQKjBKaJJw8+yQG51jiY2PNRKJIuYncjv1wBbGpmY2oM361hT Ta21/3Vy0AfbY+ph97CuLc3sbD58wbXVrv7XKSJsqHmyddGayNrevUL5iAUZuujwQkiV Bjf4wBHHmuQBAQsd1nRqBEtMyEUUXPk1x5THKL2WnJO2WWcy8OE7E0ohEm6kjwjNRoMr d5+xTYrs9rYgO1guzfoxdZ+iVoMrW3J597TghPSE7/EVBBETaGGn9ywvjvtVF5Qposfe SKym/UqNMf7wJQu41dvQmOY5wFB6IoZMuxVbidMsxsZFpCiNrQQ4g/Ob3D5U6MzvLDbF kh8w== X-Gm-Message-State: AAQBX9ehBYayugYIBdddB/WsXl47cxdu1ZpNZHHkNjp6LLc9FXebhouq +eNW/whHJrExr8AOByQNwYA= X-Google-Smtp-Source: AKy350ZgO/j6WOIysdkqHxj8eua8dcgo+KDK2FD0csNlrjJfCGtmgVmsUk4Xos3l7Ag2y16revzrHg== X-Received: by 2002:a17:902:8bc6:b0:1a2:8924:224a with SMTP id r6-20020a1709028bc600b001a28924224amr205356plo.25.1681764770838; Mon, 17 Apr 2023 13:52:50 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:52:50 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 04/33] mm: add utility functions for ptdesc Date: Mon, 17 Apr 2023 13:50:19 -0700 Message-Id: <20230417205048.15870-5-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Introduce utility functions setting the foundation for ptdescs. These will also assist in the splitting out of ptdesc from struct page. ptdesc_alloc() is defined to allocate new ptdesc pages as compound pages. This is to standardize ptdescs by allowing for one allocation and one free function, in contrast to 2 allocation and 2 free functions. Signed-off-by: Vishal Moola (Oracle) --- include/asm-generic/tlb.h | 11 ++++++++++ include/linux/mm.h | 44 +++++++++++++++++++++++++++++++++++++++ include/linux/pgtable.h | 13 ++++++++++++ 3 files changed, 68 insertions(+) diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h index b46617207c93..6bade9e0e799 100644 --- a/include/asm-generic/tlb.h +++ b/include/asm-generic/tlb.h @@ -481,6 +481,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page) return tlb_remove_page_size(tlb, page, PAGE_SIZE); } +static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt) +{ + tlb_remove_table(tlb, pt); +} + +/* Like tlb_remove_ptdesc, but for page-like page directories. 
*/ +static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt) +{ + tlb_remove_page(tlb, ptdesc_page(pt)); +} + static inline void tlb_change_page_size(struct mmu_gather *tlb, unsigned int page_size) { diff --git a/include/linux/mm.h b/include/linux/mm.h index b18848ae7e22..ec3cbe2fa665 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2744,6 +2744,45 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a } #endif /* CONFIG_MMU */ +static inline struct ptdesc *virt_to_ptdesc(const void *x) +{ + return page_ptdesc(virt_to_head_page(x)); +} + +static inline void *ptdesc_to_virt(struct ptdesc *pt) +{ + return page_to_virt(ptdesc_page(pt)); +} + +static inline void *ptdesc_address(struct ptdesc *pt) +{ + return folio_address(ptdesc_folio(pt)); +} + +static inline bool ptdesc_is_reserved(struct ptdesc *pt) +{ + return folio_test_reserved(ptdesc_folio(pt)); +} + +static inline struct ptdesc *ptdesc_alloc(gfp_t gfp, unsigned int order) +{ + struct page *page = alloc_pages(gfp | __GFP_COMP, order); + + return page_ptdesc(page); +} + +static inline void ptdesc_free(struct ptdesc *pt) +{ + struct page *page = ptdesc_page(pt); + + __free_pages(page, compound_order(page)); +} + +static inline void ptdesc_clear(void *x) +{ + clear_page(x); +} + #if USE_SPLIT_PTE_PTLOCKS #if ALLOC_SPLIT_PTLOCKS void __init ptlock_cache_init(void); @@ -2970,6 +3009,11 @@ static inline void mark_page_reserved(struct page *page) adjust_managed_page_count(page, -1); } +static inline void free_reserved_ptdesc(struct ptdesc *pt) +{ + free_reserved_page(ptdesc_page(pt)); +} + /* * Default method to free all the __init memory into the buddy system. * The freed pages will be poisoned with pattern "poison" if it's within diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 7cc6ea057ee9..7cd803aa38eb 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -97,6 +97,19 @@ TABLE_MATCH(ptl, ptl); #undef TABLE_MATCH static_assert(sizeof(struct ptdesc) <= sizeof(struct page)); +#define ptdesc_page(pt) (_Generic((pt), \ + const struct ptdesc *: (const struct page *)(pt), \ + struct ptdesc *: (struct page *)(pt))) + +#define ptdesc_folio(pt) (_Generic((pt), \ + const struct ptdesc *: (const struct folio *)(pt), \ + struct ptdesc *: (struct folio *)(pt))) + +static inline struct ptdesc *page_ptdesc(struct page *page) +{ + return (struct ptdesc *)page; +} + /* * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD] * From patchwork Mon Apr 17 20:50:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214625 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D0018C7EE30 for ; Mon, 17 Apr 2023 20:53:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231207AbjDQUx3 (ORCPT ); Mon, 17 Apr 2023 16:53:29 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39780 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230298AbjDQUxO (ORCPT ); Mon, 17 Apr 2023 16:53:14 -0400 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com [IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CA2304C2B; Mon, 17 Apr 2023 13:52:52 -0700 (PDT) 
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH 05/33] mm: Convert pmd_pgtable_page() to pmd_ptdesc()
Date: Mon, 17 Apr 2023 13:50:20 -0700
Message-Id: <20230417205048.15870-6-vishal.moola@gmail.com>
In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com>
References: <20230417205048.15870-1-vishal.moola@gmail.com>

Converts pmd_pgtable_page() to pmd_ptdesc() and all its callers. This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page.
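pmd_ptdesc() keeps the pointer arithmetic that pmd_pgtable_page() used, as the diff below shows: round the pmd_t pointer down to the start of its table with ~(PTRS_PER_PMD * sizeof(pmd_t) - 1), then translate that address into a descriptor with virt_to_ptdesc() instead of virt_to_page(). A minimal standalone sketch of the rounding step, with illustrative constants (a 512-entry table of 8-byte entries, one 4 KiB page) rather than anything taken from this patch:

/*
 * Standalone sketch of the rounding done by pmd_ptdesc() /
 * pmd_pgtable_page(); constants and names are illustrative only.
 */
#include <stdint.h>
#include <stdio.h>

#define PTRS_PER_PMD 512                       /* assumed for illustration */
typedef struct { uint64_t val; } pmd_t;        /* stand-in entry type */

static uintptr_t pmd_table_start(const pmd_t *pmd)
{
	uintptr_t mask = ~(uintptr_t)(PTRS_PER_PMD * sizeof(pmd_t) - 1);

	/* Round the entry pointer down to the start of its table page. */
	return (uintptr_t)pmd & mask;
}

int main(void)
{
	/* Pretend a PMD table sits at 0x40200000 and we hold entry 300. */
	uintptr_t table = 0x40200000UL;
	const pmd_t *entry = (const pmd_t *)(table + 300 * sizeof(pmd_t));

	printf("entry %#lx -> table %#lx\n",
	       (unsigned long)(uintptr_t)entry,
	       (unsigned long)pmd_table_start(entry));
	return 0;
}

In the kernel the masked address is handed to virt_to_ptdesc(); only the returned type changes, the lookup itself is untouched.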
Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index ec3cbe2fa665..069187e84e35 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2892,15 +2892,15 @@ static inline void pgtable_pte_page_dtor(struct page *page) #if USE_SPLIT_PMD_PTLOCKS -static inline struct page *pmd_pgtable_page(pmd_t *pmd) +static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd) { unsigned long mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1); - return virt_to_page((void *)((unsigned long) pmd & mask)); + return virt_to_ptdesc((void *)((unsigned long) pmd & mask)); } static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) { - return ptlock_ptr(pmd_pgtable_page(pmd)); + return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd))); } static inline bool pmd_ptlock_init(struct page *page) @@ -2919,7 +2919,7 @@ static inline void pmd_ptlock_free(struct page *page) ptlock_free(page); } -#define pmd_huge_pte(mm, pmd) (pmd_pgtable_page(pmd)->pmd_huge_pte) +#define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte) #else From patchwork Mon Apr 17 20:50:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214628 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 876FBC7EE2D for ; Mon, 17 Apr 2023 20:53:33 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231249AbjDQUxb (ORCPT ); Mon, 17 Apr 2023 16:53:31 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39774 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229713AbjDQUxO (ORCPT ); Mon, 17 Apr 2023 16:53:14 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 46F501726; Mon, 17 Apr 2023 13:52:54 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id sj1-20020a17090b2d8100b00247bd1a66d4so1716379pjb.5; Mon, 17 Apr 2023 13:52:54 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764774; x=1684356774; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=seoMdP0S43tB5tHiAJsyEn/McuL6oTkl/wxfiJ8zej0=; b=m9/ZIcz+Tqctc8DsupfPgmRFjJ+xGEJ5QFcDJn618MR0YPogH6sfzkHf8JHJze9cJp x5NQYQBUB4f261Rif3c3u/e/lVyrkb7fkfJolfOwhFk6XOwO7D63t/TQQqs9S95k9Ae8 FKQXzDTaphWXYOHY2MRQP+XNQ8wh3L+YHkBgztA18oGJHjweyg/A/nWWIx5QMqDpAEPB X0wu3GkIqgBlm9RuBmiAr6UEFrV7LCwD9Gy8HkrUbR2REAIxrVGyinkjPCtpi11n6vo5 ARCTsKi2jOZHutSbPq4tlksgdP3OAZq4N0EqMo6MlYwn+u6KLBtMzuBeO5yqTZEreTX6 9o+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764774; x=1684356774; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=seoMdP0S43tB5tHiAJsyEn/McuL6oTkl/wxfiJ8zej0=; b=UdTtPr/zNM6Bb4nxxF1QY8V8RGd28TaOblJq/Ex+oU99KSpSCOOQU+6xw5Zuj6XS/s mfgvytQ5mBLNvSOyD/oFdNJntGKX2E7uiLuuFTVagNKU33OvQmAsPiCGINkon9vY5f8K bnwczg2yDF5ldPZ4hflu2O3tG9EYqH/MBoruOjDBKGkhJsgRgHRQU2iFKG/MzEy7bPl/ 
q78Zj9CQ4wGrJz1XG35HTtMLicqPRx5w87AKGTN8Q1my8FWec5OJy1yPVJJIu09O1TSd lLaZtt/EJx4tPTUvq/Q5UWgd5/Ujnm/Gxxs3hbG7mDupaUQgVpEyczKHm93jhZw2n2s4 xWng== X-Gm-Message-State: AAQBX9fcAIE+2QUwBT+ljJWxyx0xjsbpAswbrtvFACX6a1B8+HpbEPRm HTCIgIN3+Xu049lMTkTYGNo= X-Google-Smtp-Source: AKy350aUejw0DeO6puoaXC3R5n6x3DzcxqacgAq+Qj51dxkW8glMsrCpGOXbLfdvjTORWF+TApg9qw== X-Received: by 2002:a05:6a20:b21:b0:eb:8d47:332a with SMTP id x33-20020a056a200b2100b000eb8d47332amr14024065pzf.36.1681764773709; Mon, 17 Apr 2023 13:52:53 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:52:53 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 06/33] mm: Convert ptlock_alloc() to use ptdescs Date: Mon, 17 Apr 2023 13:50:21 -0700 Message-Id: <20230417205048.15870-7-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page. Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm.h | 6 +++--- mm/memory.c | 4 ++-- 2 files changed, 5 insertions(+), 5 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 069187e84e35..17dc6e37ea03 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2786,7 +2786,7 @@ static inline void ptdesc_clear(void *x) #if USE_SPLIT_PTE_PTLOCKS #if ALLOC_SPLIT_PTLOCKS void __init ptlock_cache_init(void); -extern bool ptlock_alloc(struct page *page); +bool ptlock_alloc(struct ptdesc *ptdesc); extern void ptlock_free(struct page *page); static inline spinlock_t *ptlock_ptr(struct page *page) @@ -2798,7 +2798,7 @@ static inline void ptlock_cache_init(void) { } -static inline bool ptlock_alloc(struct page *page) +static inline bool ptlock_alloc(struct ptdesc *ptdesc) { return true; } @@ -2828,7 +2828,7 @@ static inline bool ptlock_init(struct page *page) * slab code uses page->slab_cache, which share storage with page->ptl. 
*/ VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page); - if (!ptlock_alloc(page)) + if (!ptlock_alloc(page_ptdesc(page))) return false; spin_lock_init(ptlock_ptr(page)); return true; diff --git a/mm/memory.c b/mm/memory.c index d4d7df041b6f..37d408ac1b8d 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -5926,14 +5926,14 @@ void __init ptlock_cache_init(void) SLAB_PANIC, NULL); } -bool ptlock_alloc(struct page *page) +bool ptlock_alloc(struct ptdesc *ptdesc) { spinlock_t *ptl; ptl = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL); if (!ptl) return false; - page->ptl = ptl; + ptdesc->ptl = ptl; return true; } From patchwork Mon Apr 17 20:50:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214627 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 45B11C7EE2C for ; Mon, 17 Apr 2023 20:53:32 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231236AbjDQUxa (ORCPT ); Mon, 17 Apr 2023 16:53:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39386 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230414AbjDQUxP (ORCPT ); Mon, 17 Apr 2023 16:53:15 -0400 Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9A5C05582; Mon, 17 Apr 2023 13:52:55 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id sj1-20020a17090b2d8100b00247bd1a66d4so1716469pjb.5; Mon, 17 Apr 2023 13:52:55 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764775; x=1684356775; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=7rEN7TkKGGrS4Nr1e9+VSo1FDhdsaSEXyIULzlMdAE4=; b=EY4pAo5dPiMdURcOVQM/Apa82VCO0I/XR1aYYt5qSz76HGumyOhUfKwA9+Mr5OFHh6 y4qX3lmzBRH5sE5hiL3G5H7+P1vU90VNiEuutpY92Aii8KdV81S7nJ0QBzccPQA4L2A1 H0r+gFj2e0wYN/AG4aVILv2cjCWqAiomeMPXPkWjszJF01iQ5TmXH8Bsz45QUOgwxoQ4 sHC8DrgU25+a1fsqdhglJSyR11j/cBOYD1x2MxJDA6rfBM+Ci8Y+wWCZicg642iTcaFB y1U70avkZLTukuwNbayq0JR0dDmctnOUY0ghxZuZ1egx4BqGjtzmW9DXZrJYTWGZ5wwv sRpQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764775; x=1684356775; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=7rEN7TkKGGrS4Nr1e9+VSo1FDhdsaSEXyIULzlMdAE4=; b=Zs4n3immWWinWs6YVXUFCiY6Nov07aEmqO0tasIkN6Fs4fQ4ZiapspIpb01T4U8rO3 zB9I2kjC9znDQd8EVi6/R/6HKaQHcsrKytAtWSAs3lfKXowPk2RTiSX+JzZiNO9FzYx8 CL5o0C3KHZd1tBvNlXz/rzcGZ24FTzL80M4WTWgZEj/dEXCFeH5n6tLMFYKFEBHzrnAH OaSDZSggZkCxqEkzTCFh+bdsk6E4xrNo4Ck7HxSzlhAmYMuCcPvqdbLo9CdfMNNn+W+O 3yGemWTVyO5ds0U9thZLPirN456nKIhSxkDuLk+KFxocuxClIez1dAGcbAyEhPslIq0M 2DNA== X-Gm-Message-State: AAQBX9dtg584RLQq+VUCQclpqCf60OBI5suxNPs7Gy8ZRTG/MLB6pipR W6z/v7AApKyIqbCfdOBK6D0= X-Google-Smtp-Source: AKy350Yghynh+/JYw1Y4OeSQkY97RuGOXV9v1JZPlpoHn0H6WhBdjqt/CneUg+QzYmLvk18Z1w5zNw== X-Received: by 2002:a05:6a20:394f:b0:ef:b02a:b35b with SMTP id r15-20020a056a20394f00b000efb02ab35bmr6833020pzg.0.1681764775054; Mon, 17 Apr 2023 13:52:55 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net 
([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.53 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:52:54 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 07/33] mm: Convert ptlock_ptr() to use ptdescs Date: Mon, 17 Apr 2023 13:50:22 -0700 Message-Id: <20230417205048.15870-8-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page. Signed-off-by: Vishal Moola (Oracle) --- arch/x86/xen/mmu_pv.c | 2 +- include/linux/mm.h | 14 +++++++------- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c index fdc91deece7e..a1c9f8dcbb5a 100644 --- a/arch/x86/xen/mmu_pv.c +++ b/arch/x86/xen/mmu_pv.c @@ -651,7 +651,7 @@ static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm) spinlock_t *ptl = NULL; #if USE_SPLIT_PTE_PTLOCKS - ptl = ptlock_ptr(page); + ptl = ptlock_ptr(page_ptdesc(page)); spin_lock_nest_lock(ptl, &mm->page_table_lock); #endif diff --git a/include/linux/mm.h b/include/linux/mm.h index 17dc6e37ea03..ed8dd0464841 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2789,9 +2789,9 @@ void __init ptlock_cache_init(void); bool ptlock_alloc(struct ptdesc *ptdesc); extern void ptlock_free(struct page *page); -static inline spinlock_t *ptlock_ptr(struct page *page) +static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc) { - return page->ptl; + return ptdesc->ptl; } #else /* ALLOC_SPLIT_PTLOCKS */ static inline void ptlock_cache_init(void) @@ -2807,15 +2807,15 @@ static inline void ptlock_free(struct page *page) { } -static inline spinlock_t *ptlock_ptr(struct page *page) +static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc) { - return &page->ptl; + return &ptdesc->ptl; } #endif /* ALLOC_SPLIT_PTLOCKS */ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) { - return ptlock_ptr(pmd_page(*pmd)); + return ptlock_ptr(page_ptdesc(pmd_page(*pmd))); } static inline bool ptlock_init(struct page *page) @@ -2830,7 +2830,7 @@ static inline bool ptlock_init(struct page *page) VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page); if (!ptlock_alloc(page_ptdesc(page))) return false; - spin_lock_init(ptlock_ptr(page)); + spin_lock_init(ptlock_ptr(page_ptdesc(page))); return true; } @@ -2900,7 +2900,7 @@ static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd) static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) { - return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd))); + return ptlock_ptr(pmd_ptdesc(pmd)); } static inline bool pmd_ptlock_init(struct page *page) From patchwork Mon Apr 17 20:50:23 2023 Content-Type: 
text/plain
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 08/33] mm: Convert pmd_ptlock_init() to use ptdescs Date: Mon, 17 Apr 2023 13:50:23 -0700 Message-Id: <20230417205048.15870-9-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page. Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm.h | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index ed8dd0464841..7eb562909b2c 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2903,12 +2903,12 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) return ptlock_ptr(pmd_ptdesc(pmd)); } -static inline bool pmd_ptlock_init(struct page *page) +static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { #ifdef CONFIG_TRANSPARENT_HUGEPAGE - page->pmd_huge_pte = NULL; + ptdesc->pmd_huge_pte = NULL; #endif - return ptlock_init(page); + return ptlock_init(ptdesc_page(ptdesc)); } static inline void pmd_ptlock_free(struct page *page) @@ -2928,7 +2928,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) return &mm->page_table_lock; } -static inline bool pmd_ptlock_init(struct page *page) { return true; } +static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; } static inline void pmd_ptlock_free(struct page *page) {} #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte) @@ -2944,7 +2944,7 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd) static inline bool pgtable_pmd_page_ctor(struct page *page) { - if (!pmd_ptlock_init(page)) + if (!pmd_ptlock_init(page_ptdesc(page))) return false; __SetPageTable(page); inc_lruvec_page_state(page, NR_PAGETABLE); From patchwork Mon Apr 17 20:50:24 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214629 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A096CC77B72 for ; Mon, 17 Apr 2023 20:53:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231266AbjDQUxc (ORCPT ); Mon, 17 Apr 2023 16:53:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39782 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230371AbjDQUxV (ORCPT ); Mon, 17 Apr 2023 16:53:21 -0400 Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B49DE4ED9; Mon, 17 Apr 2023 13:52:58 -0700 (PDT) Received: by mail-pl1-x631.google.com with SMTP id l21so9393183pla.5; Mon, 17 Apr 2023 13:52:58 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764778; x=1684356778; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=X2z4+jy2VvwYGAEqDadtXD4/9gzlnrS5efeNMpJ5P9k=; 
b=I/h1pnbYM2DwgRZCSJteD0jcjPogajwYvoNTWu05FIt8Dug2P9e0wRrWLPxc4zYyi2 bF+AVTemVcp3diPO+7OSxLM72RxW/MppC8vSDXDjD8liqzM1pBLkxoUoebzionO/oxsa qHKGIp7d97W6jlem/3yxXQy6r/jNX2Q0ie6VB85Wwsz/8JpbJnzPVNlM6MFS7XC7iPJr OFMLKNx0UPHxfGtNqDckarA/92oBRif46FzIncuYkpkQ+fhRc09HfBoH8I7mBRTJiPxj c4VsiDutZTNuFgmpjh8q1KLiugVszPA4Gaz711F805EB6LUcut5gmvOb7RYuzrO1w3sO Wqsw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764778; x=1684356778; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=X2z4+jy2VvwYGAEqDadtXD4/9gzlnrS5efeNMpJ5P9k=; b=ahEvJcvMuagBW3HiT6iUVVynMVPCU+oA6OHilXeAhncXpJd/LKeMBu7xRgJSyuN79/ KC8p3wDCAMbGF9UBJOUnWcva2pixYQmrAJAQ0/kJktnocQWaBwTwdoMVyE8lbljQghDr 5lIyUV2s3+X8ixSwot24dmMT+XSMDc1W9COy0+rcOVXYdksv4ba9gFyAfNppu4B+iIVS 5Ynwm/28f2idx3w5luGTNMw1A8b0Cf6ItqZuYLo0fN5BOW8knJnyPU26iWPmQooXokG7 J1UAiel/pCq0jegBLLQB7Zs0wREQSpwVWWVubrJwMt2az7bdxwk/zw2Tqgk9j60T0zYY C2DA== X-Gm-Message-State: AAQBX9cpc5/lB01p6GTx6lvXbFqb/GU3CfEsPc24SQNCGb/gudqd9oQ1 iqoiRE2/EL8EvVh90T1j97RazbQcRougaw== X-Google-Smtp-Source: AKy350a6Xakm9Y4oYLvOvIWfyGM8BNUM3uv3d+WixCRDkQ1rcDDo2rZhVhsbndThyou7yfhT1XyHRg== X-Received: by 2002:a17:90a:8a04:b0:247:14ac:4d3a with SMTP id w4-20020a17090a8a0400b0024714ac4d3amr17877854pjn.20.1681764778063; Mon, 17 Apr 2023 13:52:58 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.56 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:52:57 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 09/33] mm: Convert ptlock_init() to use ptdescs Date: Mon, 17 Apr 2023 13:50:24 -0700 Message-Id: <20230417205048.15870-10-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page. 
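To make the new calling convention concrete, here is a small illustrative sketch (not part of the diff; the function name is made up) of a pte table constructor once ptlock_init() takes a ptdesc. It simply mirrors the converted pgtable_pte_page_ctor() in the hunk that follows, using the page_ptdesc() helper from earlier in this series:

	/* Illustrative only: mirrors the converted constructor path below. */
	static bool example_pte_table_ctor(struct page *page)
	{
		struct ptdesc *ptdesc = page_ptdesc(page);	/* view the page as a ptdesc */

		if (!ptlock_init(ptdesc))			/* new ptdesc-based signature */
			return false;
		__SetPageTable(page);
		inc_lruvec_page_state(page, NR_PAGETABLE);
		return true;
	}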
Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm.h | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 7eb562909b2c..d2485a110936 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2818,7 +2818,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) return ptlock_ptr(page_ptdesc(pmd_page(*pmd))); } -static inline bool ptlock_init(struct page *page) +static inline bool ptlock_init(struct ptdesc *ptdesc) { /* * prep_new_page() initialize page->private (and therefore page->ptl) @@ -2827,10 +2827,10 @@ static inline bool ptlock_init(struct page *page) * It can happen if arch try to use slab for page table allocation: * slab code uses page->slab_cache, which share storage with page->ptl. */ - VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page); - if (!ptlock_alloc(page_ptdesc(page))) + VM_BUG_ON_PAGE(*(unsigned long *)&ptdesc->ptl, ptdesc_page(ptdesc)); + if (!ptlock_alloc(ptdesc)) return false; - spin_lock_init(ptlock_ptr(page_ptdesc(page))); + spin_lock_init(ptlock_ptr(ptdesc)); return true; } @@ -2843,13 +2843,13 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) return &mm->page_table_lock; } static inline void ptlock_cache_init(void) {} -static inline bool ptlock_init(struct page *page) { return true; } +static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } static inline void ptlock_free(struct page *page) {} #endif /* USE_SPLIT_PTE_PTLOCKS */ static inline bool pgtable_pte_page_ctor(struct page *page) { - if (!ptlock_init(page)) + if (!ptlock_init(page_ptdesc(page))) return false; __SetPageTable(page); inc_lruvec_page_state(page, NR_PAGETABLE); @@ -2908,7 +2908,7 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) #ifdef CONFIG_TRANSPARENT_HUGEPAGE ptdesc->pmd_huge_pte = NULL; #endif - return ptlock_init(ptdesc_page(ptdesc)); + return ptlock_init(ptdesc); } static inline void pmd_ptlock_free(struct page *page) From patchwork Mon Apr 17 20:50:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214640 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CDA4BC88CB0 for ; Mon, 17 Apr 2023 20:54:07 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230494AbjDQUyF (ORCPT ); Mon, 17 Apr 2023 16:54:05 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39950 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230432AbjDQUxV (ORCPT ); Mon, 17 Apr 2023 16:53:21 -0400 Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 0294E5B90; Mon, 17 Apr 2023 13:52:59 -0700 (PDT) Received: by mail-pl1-x631.google.com with SMTP id la15so4090223plb.11; Mon, 17 Apr 2023 13:52:59 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764779; x=1684356779; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=X3zVXX/N6SOzga9RDJq+tPAqPrrx50X9RMNZ27hRgPo=; b=sICUrUNhKpAOA4VRjrqbzI2X/H4tUjO6T2SdOnpuj4Iy+PajhE1DrUSbcJznfffRrX 
L+wg0APu+UnJsiMWo07D8eesoEiJoCPkIf7iFkO3frlAWKtnOneztwNYCGmhOQpiZeuy AXohiTFVSZ6j/QYK9dyhVhIaufaLIPexDAAJXmf6qW3dNeCDEUq++V0vxfiKc0g0e7Ab Ewxv8L1qYPD4uyzx9cQ6CGY815b+pAM3zpWc0XUF8sLnRhkHRA/O3qjcuWg0cKvRpE4M YyoU6OpnuIHEQ4e7n1BxpYaP+3T9kR2hf6Oim0PIubi0gprGsuCOvZCKtVwS9TGYF6kY TvEw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764779; x=1684356779; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=X3zVXX/N6SOzga9RDJq+tPAqPrrx50X9RMNZ27hRgPo=; b=jcGd4cdY7Ch+fIG+r2fc4u7HEtxmBRSC5AcLGAPwsin+Pq3IKf5if7v2j1ApQzRNrR Ec3fAVs9iIGiUs83RIsdKyf3CQpmH6wJYz5ZcYK/dW9s0JT1yRGSiU0vEpuGuJf0lATb YgCQRPbRlKicQ/eb1xIGECk52q+p0S82zdJVrfrjmgU7gK94APEc2mCjZ/qnPR1cstLf AmhhS5tyHdOE08e7qC2sHEnWhBXbYUw3M7lVVtxmar0oFgFYI+iCF4NoPLG2zEvwIl/s TjbUCxjyxqPwHWC8KLbGP2hnlw4vN4CxptFNXIpsG2p5ux1D9mxHxWDlGMYgABbq/MMX sODQ== X-Gm-Message-State: AAQBX9dwdiktTWl5W2IkprYefu+pOsIRXeszoSQxLgUyO/NBWCsTg3hp YFgakSwP+DfV5YCqd9EWAuI= X-Google-Smtp-Source: AKy350aL1Q/BuQU4b/coHWha7jodau934g5ZfPpWwX1i7O384OcXkkUs5sEaNrPzHDYpgrYCEKrWgg== X-Received: by 2002:a17:90a:cb8c:b0:233:f393:f6cd with SMTP id a12-20020a17090acb8c00b00233f393f6cdmr15121483pju.5.1681764779380; Mon, 17 Apr 2023 13:52:59 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.58 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:52:59 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 10/33] mm: Convert pmd_ptlock_free() to use ptdescs Date: Mon, 17 Apr 2023 13:50:25 -0700 Message-Id: <20230417205048.15870-11-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page. 
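As a rough sketch of the teardown side (illustrative only; the function name is made up), a pmd table destructor after this change converts its page to a ptdesc before calling the new pmd_ptlock_free(), exactly as pgtable_pmd_page_dtor() does in the hunk that follows:

	/* Illustrative only: mirrors the converted destructor path below. */
	static void example_pmd_table_dtor(struct page *page)
	{
		struct ptdesc *ptdesc = page_ptdesc(page);

		pmd_ptlock_free(ptdesc);		/* new ptdesc-based signature */
		__ClearPageTable(page);
		dec_lruvec_page_state(page, NR_PAGETABLE);
	}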
Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm.h | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index d2485a110936..2390fc2542aa 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2911,12 +2911,12 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) return ptlock_init(ptdesc); } -static inline void pmd_ptlock_free(struct page *page) +static inline void pmd_ptlock_free(struct ptdesc *ptdesc) { #ifdef CONFIG_TRANSPARENT_HUGEPAGE - VM_BUG_ON_PAGE(page->pmd_huge_pte, page); + VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc)); #endif - ptlock_free(page); + ptlock_free(ptdesc_page(ptdesc)); } #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte) @@ -2929,7 +2929,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd) } static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; } -static inline void pmd_ptlock_free(struct page *page) {} +static inline void pmd_ptlock_free(struct ptdesc *ptdesc) {} #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte) @@ -2953,7 +2953,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page) static inline void pgtable_pmd_page_dtor(struct page *page) { - pmd_ptlock_free(page); + pmd_ptlock_free(page_ptdesc(page)); __ClearPageTable(page); dec_lruvec_page_state(page, NR_PAGETABLE); } From patchwork Mon Apr 17 20:50:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214630 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B52DEC7EE2E for ; Mon, 17 Apr 2023 20:53:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230405AbjDQUxd (ORCPT ); Mon, 17 Apr 2023 16:53:33 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39976 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230463AbjDQUxW (ORCPT ); Mon, 17 Apr 2023 16:53:22 -0400 Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com [IPv6:2607:f8b0:4864:20::102b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 82C0B3ABC; Mon, 17 Apr 2023 13:53:01 -0700 (PDT) Received: by mail-pj1-x102b.google.com with SMTP id cm18-20020a17090afa1200b0024713adf69dso15384045pjb.3; Mon, 17 Apr 2023 13:53:01 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764781; x=1684356781; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Hc0m/U8RaSqOAaIvHe2pZ8la6AwQPtvql1ppirfWYYQ=; b=FTIIrhTe8vfWt4jgLQ0zryZ+NfsJ/OFhYpHyrwEJjHtN6fYrjayv1wmbuEiWrZWNBG Ab6oC0JEVFKaVtr5ERs/HGyia4m0cuJPp54Y6qKJvGxwoWwnKA3jL+mq6GAMCgFGvFWn bEbIp/iuCRzaSQasJ9iYfHsEIZ8C5L6UBjIsqJjVLRNCdj3wFA0VCJaoyvqsQShlPxAr nn14k2m42+J6xR2hfl2ffw8tFItkZsF+DKNsuBC18vZWoMB4mE4FcDOZiJ4pqP0DqYky PjLPxBMY5gAsTXZg/iGvnD4lxFGHfkuFymQLt7EWHTlIBpCDY+iSj33ZpMo6EFdNFztz HYrQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764781; x=1684356781; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Hc0m/U8RaSqOAaIvHe2pZ8la6AwQPtvql1ppirfWYYQ=; 
b=MTFMeBxFMlPbgAe5gYqCXZTaw1wzkjlqBV6krtmQ9Amzi1k62NjZeWlQqzKGUhGV55 sg6x579oVVBiexyhtTL1fucKn3270FFTjbeUh9v/nSpEweiYdGkh0zaj1ak+ofrsyPYR u9nmQxWUPWGbuySfgrPb3PCWzQA+bMLEpyCWbekM7X+ZukbFqluRVt0anMJlXGHQqGDr RFY003M4l/m8r1CELKeGg8BVxD90UabnU6K3PnDB/APXWTTNnvNcwk9UPuT6hJc0vYB0 yeW/ua7KjCyH2os7XZnq4qFp4zVezpAJoIAtyQJaUz4R7FLznevbRRBFwg/Cx7af3rFK weAg== X-Gm-Message-State: AAQBX9eVh7ATQfmX06/mkym2Q0pU5doLxJkujUu3Ua7e8uwnTm85GZAH huOgKOf/lB/+3fcS99j3sMs= X-Google-Smtp-Source: AKy350ZMofF2taT9lCtSfGwXj3mTLikp6R4dNTLL/p579XSegIqzslR/71TUEAWTkG6EWBZpLey91g== X-Received: by 2002:a17:90a:d143:b0:23b:2c51:6e7 with SMTP id t3-20020a17090ad14300b0023b2c5106e7mr16554327pjw.21.1681764780697; Mon, 17 Apr 2023 13:53:00 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.52.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:00 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 11/33] mm: Convert ptlock_free() to use ptdescs Date: Mon, 17 Apr 2023 13:50:26 -0700 Message-Id: <20230417205048.15870-12-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page. 
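For the split-ptlock configuration, a minimal illustrative sketch (not part of the diff; the function name is made up) of the lock's lifetime now that both ptlock_alloc() and ptlock_free() take a ptdesc, matching the include/linux/mm.h and mm/memory.c hunks that follow:

	/* Illustrative only, assuming ALLOC_SPLIT_PTLOCKS: the split lock hangs
	 * off the ptdesc, so allocation and release are symmetric. */
	static bool example_split_ptlock_cycle(struct ptdesc *ptdesc)
	{
		if (!ptlock_alloc(ptdesc))		/* allocates ptdesc->ptl */
			return false;
		spin_lock_init(ptlock_ptr(ptdesc));

		/* ... page table in use ... */

		ptlock_free(ptdesc);			/* returns ptdesc->ptl to the cache */
		return true;
	}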
Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm.h | 10 +++++----- mm/memory.c | 4 ++-- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 2390fc2542aa..17a64cfd1430 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2787,7 +2787,7 @@ static inline void ptdesc_clear(void *x) #if ALLOC_SPLIT_PTLOCKS void __init ptlock_cache_init(void); bool ptlock_alloc(struct ptdesc *ptdesc); -extern void ptlock_free(struct page *page); +void ptlock_free(struct ptdesc *ptdesc); static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc) { @@ -2803,7 +2803,7 @@ static inline bool ptlock_alloc(struct ptdesc *ptdesc) return true; } -static inline void ptlock_free(struct page *page) +static inline void ptlock_free(struct ptdesc *ptdesc) { } @@ -2844,7 +2844,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) } static inline void ptlock_cache_init(void) {} static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } -static inline void ptlock_free(struct page *page) {} +static inline void ptlock_free(struct ptdesc *ptdesc) {} #endif /* USE_SPLIT_PTE_PTLOCKS */ static inline bool pgtable_pte_page_ctor(struct page *page) @@ -2858,7 +2858,7 @@ static inline bool pgtable_pte_page_ctor(struct page *page) static inline void pgtable_pte_page_dtor(struct page *page) { - ptlock_free(page); + ptlock_free(page_ptdesc(page)); __ClearPageTable(page); dec_lruvec_page_state(page, NR_PAGETABLE); } @@ -2916,7 +2916,7 @@ static inline void pmd_ptlock_free(struct ptdesc *ptdesc) #ifdef CONFIG_TRANSPARENT_HUGEPAGE VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc)); #endif - ptlock_free(ptdesc_page(ptdesc)); + ptlock_free(ptdesc); } #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte) diff --git a/mm/memory.c b/mm/memory.c index 37d408ac1b8d..ca74425c9405 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -5937,8 +5937,8 @@ bool ptlock_alloc(struct ptdesc *ptdesc) return true; } -void ptlock_free(struct page *page) +void ptlock_free(struct ptdesc *ptdesc) { - kmem_cache_free(page_ptl_cachep, page->ptl); + kmem_cache_free(page_ptl_cachep, ptdesc->ptl); } #endif From patchwork Mon Apr 17 20:50:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214631 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 79D35C87FE2 for ; Mon, 17 Apr 2023 20:53:35 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231276AbjDQUxe (ORCPT ); Mon, 17 Apr 2023 16:53:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39984 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230467AbjDQUxW (ORCPT ); Mon, 17 Apr 2023 16:53:22 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id D49305FE9; Mon, 17 Apr 2023 13:53:02 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id cm18-20020a17090afa1200b0024713adf69dso15384131pjb.3; Mon, 17 Apr 2023 13:53:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764782; x=1684356782; h=content-transfer-encoding:mime-version:references:in-reply-to 
:message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=WrZX7kBoYSlYOsS+wVtDQN8RsK5AFCnoOOzdjwi5Mms=; b=Fq/GP8pcdLi4I5bPiFEsvLaXpxmzLee60TMK8PZJIeuQRzzOkEw4LNRxem1TjMHkl4 hpoJjh3mkwkBNIc+7ZgY54HKylkrI6wgo3KFZgMqWIFkgvOs6SvSJyDnyNXKU57Vfgni F3o9PaXHz7p6eZRgp/mKtHM6HxVxSp+JnGgjpmMtA2nsWbkEb7dNgfcs3WzKBla+e/KK mEc/vn9ckYnWIktc03O3OhGV7y0sYxPsn1QEH3DBGZVQjs25sE8N0p2VjcijmIU6liuv XPjI1oYrX0OJGyw0BE4l6r4tCUbdIF44UY/L4uGFZKlEP+7PRKuryCCDJJcwRQi8c96H TYTQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764782; x=1684356782; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=WrZX7kBoYSlYOsS+wVtDQN8RsK5AFCnoOOzdjwi5Mms=; b=TXOm/eBaYFobEU9sbXn5d0JHoY56xrwhL5SmgeUyXPor3jQbjzP/h2JUhsKefhwQfq oRgDMIXM2jXxP56ZngydTYQBFWFCUNydzvkdo/lxWJ+HLz+gNQgzeL2MJo5puZ33mg+O 3Xlrzg5OuBI3D+6j89BqeNcau9vYFqfphM0X94Centfkr7cEXfxjZyQ8Sg5ffvOE1oXE cRN9vrJWgehOp/kiFrZuT8MziU1bjrGkOmj/apEGJxJbLnqPv/+ViuxidOJBIsnWDoVU gG+vRKJHJvpUKwitXNCs9i5NdGO9ykHiTgfhmTiKAirA5G/Ilit60N80P2Pqz4DH5TT5 FfpQ== X-Gm-Message-State: AAQBX9eJ+jL38ZW0LB9cuT16eemyCsSirAG4tZerWedHCvc0DOq2+uLI ZrmikYLRyn2p55K1ujnZjbg= X-Google-Smtp-Source: AKy350ZKSmY2j825YUr7PKp4UY4ecmOCun/8MK3oszIbqT0tc1tYYx3cO7v8JESnTYGwYaXHIhiESg== X-Received: by 2002:a17:902:ecd2:b0:1a6:9671:253e with SMTP id a18-20020a170902ecd200b001a69671253emr219179plh.47.1681764781984; Mon, 17 Apr 2023 13:53:01 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.00 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:01 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 12/33] mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor} Date: Mon, 17 Apr 2023 13:50:27 -0700 Message-Id: <20230417205048.15870-13-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Creates ptdesc_pte_ctor(), ptdesc_pmd_ctor(), ptdesc_pte_dtor(), and ptdesc_pmd_dtor() and make the original pgtable constructor/destructors wrappers. 
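An illustrative sketch of how an arch-side allocation path might pair the new helpers (the example_* function names are hypothetical; ptdesc_alloc(), ptdesc_free(), ptdesc_address() and virt_to_ptdesc() are the helpers used elsewhere in this series):

	/* Hypothetical pte table alloc/free path built on the new ctor/dtor. */
	static pte_t *example_pte_alloc_one(gfp_t gfp)
	{
		struct ptdesc *ptdesc = ptdesc_alloc(gfp, 0);

		if (!ptdesc)
			return NULL;
		if (!ptdesc_pte_ctor(ptdesc)) {		/* PageTable flag, stats, ptlock */
			ptdesc_free(ptdesc);
			return NULL;
		}
		return (pte_t *)ptdesc_address(ptdesc);
	}

	static void example_pte_free_one(pte_t *pte)
	{
		struct ptdesc *ptdesc = virt_to_ptdesc(pte);

		ptdesc_pte_dtor(ptdesc);		/* undo the ctor's accounting */
		ptdesc_free(ptdesc);
	}

This is the same pattern the later architecture conversions in this series follow.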
Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm.h | 56 ++++++++++++++++++++++++++++++++++------------ 1 file changed, 42 insertions(+), 14 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 17a64cfd1430..cb136d2fdf74 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2847,20 +2847,34 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } static inline void ptlock_free(struct ptdesc *ptdesc) {} #endif /* USE_SPLIT_PTE_PTLOCKS */ -static inline bool pgtable_pte_page_ctor(struct page *page) +static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc) { - if (!ptlock_init(page_ptdesc(page))) + struct folio *folio = ptdesc_folio(ptdesc); + + if (!ptlock_init(ptdesc)) return false; - __SetPageTable(page); - inc_lruvec_page_state(page, NR_PAGETABLE); + __SetPageTable(&folio->page); + lruvec_stat_add_folio(folio, NR_PAGETABLE); return true; } +static inline bool pgtable_pte_page_ctor(struct page *page) +{ + return ptdesc_pte_ctor(page_ptdesc(page)); +} + +static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc) +{ + struct folio *folio = ptdesc_folio(ptdesc); + + ptlock_free(ptdesc); + __ClearPageTable(&folio->page); + lruvec_stat_sub_folio(folio, NR_PAGETABLE); +} + static inline void pgtable_pte_page_dtor(struct page *page) { - ptlock_free(page_ptdesc(page)); - __ClearPageTable(page); - dec_lruvec_page_state(page, NR_PAGETABLE); + ptdesc_pte_dtor(page_ptdesc(page)); } #define pte_offset_map_lock(mm, pmd, address, ptlp) \ @@ -2942,20 +2956,34 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd) return ptl; } -static inline bool pgtable_pmd_page_ctor(struct page *page) +static inline bool ptdesc_pmd_ctor(struct ptdesc *ptdesc) { - if (!pmd_ptlock_init(page_ptdesc(page))) + struct folio *folio = ptdesc_folio(ptdesc); + + if (!pmd_ptlock_init(ptdesc)) return false; - __SetPageTable(page); - inc_lruvec_page_state(page, NR_PAGETABLE); + __SetPageTable(&folio->page); + lruvec_stat_add_folio(folio, NR_PAGETABLE); return true; } +static inline bool pgtable_pmd_page_ctor(struct page *page) +{ + return ptdesc_pmd_ctor(page_ptdesc(page)); +} + +static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc) +{ + struct folio *folio = ptdesc_folio(ptdesc); + + pmd_ptlock_free(ptdesc); + __ClearPageTable(&folio->page); + lruvec_stat_sub_folio(folio, NR_PAGETABLE); +} + static inline void pgtable_pmd_page_dtor(struct page *page) { - pmd_ptlock_free(page_ptdesc(page)); - __ClearPageTable(page); - dec_lruvec_page_state(page, NR_PAGETABLE); + ptdesc_pmd_dtor(page_ptdesc(page)); } /* From patchwork Mon Apr 17 20:50:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214639 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 64966C88C98 for ; Mon, 17 Apr 2023 20:54:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231375AbjDQUyE (ORCPT ); Mon, 17 Apr 2023 16:54:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230494AbjDQUxX (ORCPT ); Mon, 17 Apr 2023 16:53:23 -0400 Received: from mail-pj1-x1033.google.com (mail-pj1-x1033.google.com [IPv6:2607:f8b0:4864:20::1033]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 
2FC1F659A; Mon, 17 Apr 2023 13:53:04 -0700 (PDT) Received: by mail-pj1-x1033.google.com with SMTP id cm18-20020a17090afa1200b0024713adf69dso15384247pjb.3; Mon, 17 Apr 2023 13:53:04 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764783; x=1684356783; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=WiO1HawkIhn5Pxy3bGXHIBKOPwk6qSqlLgZYbsR783I=; b=i+wzmC4THa/GsSihCeauBzLBRwN2BkRSUDfwFHkQgHnG3pkWReRH8MPN+shfgDqHTj IeqmXv7VhbjU+1u0UCoy1GJsHdeP9LwybB0OHvKEyW1YIxBwbDPSb41pvZGpSpOpMkiQ WphjKneGpmymAtIBRE8JTRbqkCrOvlbdVWpBDf/QtnTqQmVA9yQNunQ+rFi4omAX2wkV RN+R+ZBLGm6OPOimIfvRu38tdAQhoIVp1Mn1HnU9Gwaf7SKiulzeceUhW9g6F763mHYB xGmH5zofQsKiypu290WAifFetv+03PwrQODgzNDZ7SB6mx5PwXcbwgLkn67IOgTYLm5f HyhQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764783; x=1684356783; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=WiO1HawkIhn5Pxy3bGXHIBKOPwk6qSqlLgZYbsR783I=; b=ZSWQ+GmJCLo6sP3TjB6hnUtNnVEIGiElsgEqsyRHOzacoMwCjNPocdU1py39YeZZHs XRbFNG2RzAl8EnMFVNs5zrJPZgYpBedfmHuvApIFo5bKGWlAouBrBi7nD8jHWAghrZj/ 1gASghlkmwjzSUzpPouz/YhBrC+M6/88VbKWNz0IHvWG/B0JikNEaJza10LC601dTPXn KxtLqn8wGIUMOeQiXWidODEQPK67VPqkb1Ce5cjNa01y2maFbm5C+HnGNb3IcQb4A08q diobAB+fwRka+LHvY/F1ARBcwDZqovg5U7EdMCTYBNflz43TwXo7Sq8IaFI3n5fxHZ2Q M1yA== X-Gm-Message-State: AAQBX9dusatvaw1XLwiXQm7DIVKhcLOCmSwEBNQJp4YLjPh8iDM2bycV wXkuMaPlcB22qk9LQdhRpnY= X-Google-Smtp-Source: AKy350ZsayM5AzK3U9g9Pm56Yxkj5Ek92aNaeMW1rTxdVse4fOpzec7lc2oZYj+3ilv2TEF1mDgQ5A== X-Received: by 2002:a17:90b:4f8f:b0:23f:58d6:b532 with SMTP id qe15-20020a17090b4f8f00b0023f58d6b532mr15289201pjb.5.1681764783502; Mon, 17 Apr 2023 13:53:03 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.02 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:03 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 13/33] powerpc: Convert various functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:28 -0700 Message-Id: <20230417205048.15870-14-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org In order to split struct ptdesc from struct page, convert various functions to use ptdescs. 
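The heart of the conversion is that the fragment reference count moves from struct page to ptdesc->pt_frag_refcount. A condensed illustrative sketch of the free side (the function name is made up; it mirrors pte_fragment_free() in the diff that follows, minus the reserved-page and kernel-table handling):

	/* Illustrative only: the last fragment reference frees the whole table. */
	static void example_pte_fragment_free(unsigned long *table)
	{
		struct ptdesc *ptdesc = virt_to_ptdesc(table);

		BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0);
		if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) {
			ptdesc_pte_dtor(ptdesc);	/* clear page-table state and stats */
			ptdesc_free(ptdesc);		/* release the backing page */
		}
	}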
Signed-off-by: Vishal Moola (Oracle) --- arch/powerpc/mm/book3s64/mmu_context.c | 10 +++--- arch/powerpc/mm/book3s64/pgtable.c | 32 +++++++++--------- arch/powerpc/mm/pgtable-frag.c | 46 +++++++++++++------------- 3 files changed, 44 insertions(+), 44 deletions(-) diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c index c766e4c26e42..b22ad2839897 100644 --- a/arch/powerpc/mm/book3s64/mmu_context.c +++ b/arch/powerpc/mm/book3s64/mmu_context.c @@ -246,15 +246,15 @@ static void destroy_contexts(mm_context_t *ctx) static void pmd_frag_destroy(void *pmd_frag) { int count; - struct page *page; + struct ptdesc *ptdesc; - page = virt_to_page(pmd_frag); + ptdesc = virt_to_ptdesc(pmd_frag); /* drop all the pending references */ count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ - if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) { - pgtable_pmd_page_dtor(page); - __free_page(page); + if (atomic_sub_and_test(PMD_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { + ptdesc_pmd_dtor(ptdesc); + ptdesc_free(ptdesc); } } diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c index 85c84e89e3ea..7693be80c0f9 100644 --- a/arch/powerpc/mm/book3s64/pgtable.c +++ b/arch/powerpc/mm/book3s64/pgtable.c @@ -306,22 +306,22 @@ static pmd_t *get_pmd_from_cache(struct mm_struct *mm) static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm) { void *ret = NULL; - struct page *page; + struct ptdesc *ptdesc; gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO; if (mm == &init_mm) gfp &= ~__GFP_ACCOUNT; - page = alloc_page(gfp); - if (!page) + ptdesc = ptdesc_alloc(gfp); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(page)) { - __free_pages(page, 0); + if (!ptdesc_pmd_ctor(ptdesc)) { + ptdesc_free(ptdesc); return NULL; } - atomic_set(&page->pt_frag_refcount, 1); + atomic_set(&ptdesc->pt_frag_refcount, 1); - ret = page_address(page); + ret = ptdesc_address(ptdesc); /* * if we support only one fragment just return the * allocated page. @@ -331,12 +331,12 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm) spin_lock(&mm->page_table_lock); /* - * If we find pgtable_page set, we return + * If we find ptdesc_page set, we return * the allocated page with single fragment * count. 
*/ if (likely(!mm->context.pmd_frag)) { - atomic_set(&page->pt_frag_refcount, PMD_FRAG_NR); + atomic_set(&ptdesc->pt_frag_refcount, PMD_FRAG_NR); mm->context.pmd_frag = ret + PMD_FRAG_SIZE; } spin_unlock(&mm->page_table_lock); @@ -357,15 +357,15 @@ pmd_t *pmd_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr) void pmd_fragment_free(unsigned long *pmd) { - struct page *page = virt_to_page(pmd); + struct ptdesc *ptdesc = virt_to_ptdesc(pmd); - if (PageReserved(page)) - return free_reserved_page(page); + if (ptdesc_is_reserved(ptdesc)) + return free_reserved_ptdesc(ptdesc); - BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0); - if (atomic_dec_and_test(&page->pt_frag_refcount)) { - pgtable_pmd_page_dtor(page); - __free_page(page); + BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0); + if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) { + ptdesc_pmd_dtor(ptdesc); + ptdesc_free(ptdesc); } } diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c index 20652daa1d7e..cf08831fa7c3 100644 --- a/arch/powerpc/mm/pgtable-frag.c +++ b/arch/powerpc/mm/pgtable-frag.c @@ -18,15 +18,15 @@ void pte_frag_destroy(void *pte_frag) { int count; - struct page *page; + struct ptdesc *ptdesc; - page = virt_to_page(pte_frag); + ptdesc = virt_to_ptdesc(pte_frag); /* drop all the pending references */ count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ - if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) { - pgtable_pte_page_dtor(page); - __free_page(page); + if (atomic_sub_and_test(PTE_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { + ptdesc_pte_dtor(ptdesc); + ptdesc_free(ptdesc); } } @@ -55,25 +55,25 @@ static pte_t *get_pte_from_cache(struct mm_struct *mm) static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel) { void *ret = NULL; - struct page *page; + struct ptdesc *ptdesc; if (!kernel) { - page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT); - if (!page) + ptdesc = ptdesc_alloc(PGALLOC_GFP | __GFP_ACCOUNT); + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if (!ptdesc_pte_ctor(ptdesc)) { + ptdesc_free(ptdesc); return NULL; } } else { - page = alloc_page(PGALLOC_GFP); - if (!page) + ptdesc = ptdesc_alloc(PGALLOC_GFP); + if (!ptdesc) return NULL; } - atomic_set(&page->pt_frag_refcount, 1); + atomic_set(&ptdesc->pt_frag_refcount, 1); - ret = page_address(page); + ret = ptdesc_address(ptdesc); /* * if we support only one fragment just return the * allocated page. @@ -82,12 +82,12 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel) return ret; spin_lock(&mm->page_table_lock); /* - * If we find pgtable_page set, we return + * If we find ptdesc_page set, we return * the allocated page with single fragment * count. 
*/ if (likely(!pte_frag_get(&mm->context))) { - atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR); + atomic_set(&ptdesc->pt_frag_refcount, PTE_FRAG_NR); pte_frag_set(&mm->context, ret + PTE_FRAG_SIZE); } spin_unlock(&mm->page_table_lock); @@ -108,15 +108,15 @@ pte_t *pte_fragment_alloc(struct mm_struct *mm, int kernel) void pte_fragment_free(unsigned long *table, int kernel) { - struct page *page = virt_to_page(table); + struct ptdesc *ptdesc = virt_to_ptdesc(table); - if (PageReserved(page)) - return free_reserved_page(page); + if (ptdesc_is_reserved(ptdesc)) + return free_reserved_ptdesc(ptdesc); - BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0); - if (atomic_dec_and_test(&page->pt_frag_refcount)) { + BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0); + if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) { if (!kernel) - pgtable_pte_page_dtor(page); - __free_page(page); + ptdesc_pte_dtor(ptdesc); + ptdesc_free(ptdesc); } } From patchwork Mon Apr 17 20:50:29 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214635 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id B2E63CA9EA2 for ; Mon, 17 Apr 2023 20:54:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231204AbjDQUyA (ORCPT ); Mon, 17 Apr 2023 16:54:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39936 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230498AbjDQUxX (ORCPT ); Mon, 17 Apr 2023 16:53:23 -0400 Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A7F66619C; Mon, 17 Apr 2023 13:53:05 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id z11-20020a17090abd8b00b0024721c47ceaso14040322pjr.3; Mon, 17 Apr 2023 13:53:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764785; x=1684356785; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=/Q5rpHIbLtPdsxzDzaO3buqXOd2yqrT0Z007lfiZkpI=; b=oN3WN3FaawXE5CF6Cg/W/uIlOgFpp+g16jZXog0VNnI/qtOkBdWsDVehFGMED7dn5+ e56tIJJLtydkdncbDj6EgMfHci5dsJ/ubCUoN7HG1abvk1Sxq0PSQWiGjWfSNWnYoCUZ iexosl+123qGqL0uhQqkt6/lQEj+ZZbGmt11o31Lgc0lv69E3YkAOFWPJgHBPxqW6Hcs AY1JBassDB9z9gn7HCEtsHeuw7WKOVmcQJj/XwZLajgpzdXqkSttOF9JB8ZOKpdhpqCp KcapkWtUG6onUSGsHkpFKHhWq2wTFC19ykRFpbUoVueZkn9rzc97nSY2du8bEMz2qr3M LcZg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764785; x=1684356785; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/Q5rpHIbLtPdsxzDzaO3buqXOd2yqrT0Z007lfiZkpI=; b=I3MLdeR/WyhGVTlJEn8DNsm7VUtyeAw94co+O5YJRedemkKonB/mRdHJhFGzn1qjMM IlLARbe6iy/gfRPyMmXBHosIIIYbejXYWi0q/v52ZqJEPfxhicfjilcKVrq7L74Qssxg DShfs+LrtMLg5OcFFJjD8Ih0P+sZAm4BR/00UxgKEdLyy2jwyEYtODh/oLgSQmgRZ3R7 Dbo3585s3BxMBt7NqY5n/RA+Lm40uYN+iIMLI0YVG6AZNslsOOHh9LNIOHIF3V+Ov+ID Eb9Cm9cMQh57TdGQ8y0yPaPFYThgOwBCHaEaSKo9XcKw13yfDSic2OauJGt7x9pNGpAM xp+A== X-Gm-Message-State: AAQBX9fXYimncahdGZTWriyBmBo7Pcn+4vrgAgKUiPvM+zaC5lVjdzpJ r28s200znAf0qRyV4enoEyU= 
X-Google-Smtp-Source: AKy350bQu6z/qcpmnOWvPCYz19t9kGfi+ogJbAChD8E648rYRlxNxJTxaR2oZuddVpCSFFn5irXClA== X-Received: by 2002:a17:90b:4d81:b0:240:7f0d:9232 with SMTP id oj1-20020a17090b4d8100b002407f0d9232mr16482647pjb.3.1681764785136; Mon, 17 Apr 2023 13:53:05 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:04 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 14/33] x86: Convert various functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:29 -0700 Message-Id: <20230417205048.15870-15-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org In order to split struct ptdesc from struct page, convert various functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use ptdesc_alloc() and ptdesc_address() instead to help standardize page tables further. Signed-off-by: Vishal Moola (Oracle) --- arch/x86/mm/pgtable.c | 46 +++++++++++++++++++++++++------------------ 1 file changed, 27 insertions(+), 19 deletions(-) diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index afab0bc7862b..9b6f81c8eb32 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -52,7 +52,7 @@ early_param("userpte", setup_userpte); void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) { - pgtable_pte_page_dtor(pte); + ptdesc_pte_dtor(page_ptdesc(pte)); paravirt_release_pte(page_to_pfn(pte)); paravirt_tlb_remove_table(tlb, pte); } @@ -60,7 +60,7 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) #if CONFIG_PGTABLE_LEVELS > 2 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) { - struct page *page = virt_to_page(pmd); + struct ptdesc *ptdesc = virt_to_ptdesc(pmd); paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT); /* * NOTE! 
For PAE, any changes to the top page-directory-pointer-table @@ -69,8 +69,8 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) #ifdef CONFIG_X86_PAE tlb->need_flush_all = 1; #endif - pgtable_pmd_page_dtor(page); - paravirt_tlb_remove_table(tlb, page); + ptdesc_pmd_dtor(ptdesc); + paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc)); } #if CONFIG_PGTABLE_LEVELS > 3 @@ -92,16 +92,16 @@ void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d) static inline void pgd_list_add(pgd_t *pgd) { - struct page *page = virt_to_page(pgd); + struct ptdesc *ptdesc = virt_to_ptdesc(pgd); - list_add(&page->lru, &pgd_list); + list_add(&ptdesc->pt_list, &pgd_list); } static inline void pgd_list_del(pgd_t *pgd) { - struct page *page = virt_to_page(pgd); + struct ptdesc *ptdesc = virt_to_ptdesc(pgd); - list_del(&page->lru); + list_del(&ptdesc->pt_list); } #define UNSHARED_PTRS_PER_PGD \ @@ -112,12 +112,12 @@ static inline void pgd_list_del(pgd_t *pgd) static void pgd_set_mm(pgd_t *pgd, struct mm_struct *mm) { - virt_to_page(pgd)->pt_mm = mm; + virt_to_ptdesc(pgd)->pt_mm = mm; } struct mm_struct *pgd_page_get_mm(struct page *page) { - return page->pt_mm; + return page_ptdesc(page)->pt_mm; } static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd) @@ -213,11 +213,14 @@ void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd) static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) { int i; + struct ptdesc *ptdesc; for (i = 0; i < count; i++) if (pmds[i]) { - pgtable_pmd_page_dtor(virt_to_page(pmds[i])); - free_page((unsigned long)pmds[i]); + ptdesc = virt_to_ptdesc(pmds[i]); + + ptdesc_pmd_dtor(ptdesc); + ptdesc_free(ptdesc); mm_dec_nr_pmds(mm); } } @@ -232,16 +235,21 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) gfp &= ~__GFP_ACCOUNT; for (i = 0; i < count; i++) { - pmd_t *pmd = (pmd_t *)__get_free_page(gfp); - if (!pmd) + pmd_t *pmd = NULL; + struct ptdesc *ptdesc = ptdesc_alloc(gfp, 0); + + if (!ptdesc) failed = true; - if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd))) { - free_page((unsigned long)pmd); - pmd = NULL; + if (ptdesc && !ptdesc_pmd_ctor(ptdesc)) { + ptdesc_free(ptdesc); + ptdesc = NULL; failed = true; } - if (pmd) + if (ptdesc) { mm_inc_nr_pmds(mm); + pmd = (pmd_t *)ptdesc_address(ptdesc); + } + pmds[i] = pmd; } @@ -838,7 +846,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr) free_page((unsigned long)pmd_sv); - pgtable_pmd_page_dtor(virt_to_page(pmd)); + ptdesc_pmd_dtor(virt_to_ptdesc(pmd)); free_page((unsigned long)pmd); return 1; From patchwork Mon Apr 17 20:50:30 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214633 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 12FDBC7EE21 for ; Mon, 17 Apr 2023 20:53:37 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231282AbjDQUxe (ORCPT ); Mon, 17 Apr 2023 16:53:34 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39950 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230525AbjDQUxY (ORCPT ); Mon, 17 Apr 2023 16:53:24 -0400 Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com [IPv6:2607:f8b0:4864:20::102d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 364526A7C; Mon, 17 Apr 2023 13:53:07 
-0700 (PDT) Received: by mail-pj1-x102d.google.com with SMTP id h24-20020a17090a9c1800b002404be7920aso27917370pjp.5; Mon, 17 Apr 2023 13:53:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764786; x=1684356786; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=oL7THI6COKAZs945t/ufo2xvJ6u97lYjYyNIJW1c0dQ=; b=doNd2TAUyyyVhrEHrHJgS8adO/z3uWhCb50/lQYlJdk5oqIjZwFvTJlGFJ89/QBcif 8imrD8TFaRte2p5w0KyYL2OFgHoMnHEPcvjiPP5MaiHba2amKVAMgresLNg9DoMmTCzS PUuzaVsdF5ThP0OKEtKVPgORmoth5LctgfKQpFi3NQhv2sSntUA0rqVkNtUltdbMYKjo vztqHLhdDPahrz8LgoZyQMd0QcyNEvkeUbtOTxQhJimZ8P9c4sf1Y5B8mePYpgeVVI0l u553/In38aRZGgvsmj1RZFHLxXeA0/2uAaIsm3kC5sKW6TY7wXshXxHWLV3FZoGUXIIW 7irQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764786; x=1684356786; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=oL7THI6COKAZs945t/ufo2xvJ6u97lYjYyNIJW1c0dQ=; b=Tx/n4GApmN9yM4+QGzYGz8qD0S8ggZN3vYyMZd1Y27vwGhygCW8pddPru3hmfeaXvl XZKkvaaKLuUVzD6qN8f00B8CpQiUk1Nb0wqBU9KKM3N0DKT7SSHceOtp77lP+Il3U7Oo wNpuY2Dzbz3eV9Kwxim3hJ93VsS5i2eaJLvIcApgEOgCtmRVoznjjLzLLRWUojS7EgYS IwW+PFbjHLnRR6Mc27NQNwQr5W+IAuQ2rRVDWDZmkTDINRs6cBHJbgFYBrZsaZjEYsCA nsfv/hb9JJQDNF1Tr3pvf6fFDuVne6Ab2e+S6tE0tDeBYze++JzkJsyR2YAi9A7UYPL7 zeVg== X-Gm-Message-State: AAQBX9ejU1ldeHiVwjuSlSb4qEhKDpwlJnu7gx0SBhHafKrtqgfLymxq MqAWiw5FwhJcqI8qLjClFLQ= X-Google-Smtp-Source: AKy350a3sfSYyKQBZCHGHLi0XTtVBj1S8oso+4y2+ULnoZrU3jXoGaDkpnKpDVzrixRlynGGGLFh9Q== X-Received: by 2002:a05:6a20:7d9b:b0:f0:6d71:5f58 with SMTP id v27-20020a056a207d9b00b000f06d715f58mr1513174pzj.50.1681764786395; Mon, 17 Apr 2023 13:53:06 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.05 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:06 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 15/33] s390: Convert various gmap functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:30 -0700 Message-Id: <20230417205048.15870-16-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org In order to split struct ptdesc from struct page, convert various functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use ptdesc_alloc() and ptdesc_address() instead to help standardize page tables further. 
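The pattern applied throughout the diff below: CRST tables are allocated as ptdescs, linked on the gmap lists through ptdesc->pt_list instead of page->lru, and the guest address is kept in _pt_s390_gaddr. An illustrative sketch of the allocation side (the function name is made up; it condenses what gmap_alloc() and gmap_alloc_table() do after this patch):

	/* Illustrative only: allocate and track one CRST table for a gmap. */
	static unsigned long *example_gmap_crst_alloc(struct gmap *gmap,
						      unsigned long etype)
	{
		struct ptdesc *ptdesc;
		unsigned long *table;

		ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER);
		if (!ptdesc)
			return NULL;
		ptdesc->_pt_s390_gaddr = 0;			/* keep the guest address clear on a fresh table */
		list_add(&ptdesc->pt_list, &gmap->crst_list);	/* pt_list replaces page->lru */
		table = ptdesc_to_virt(ptdesc);
		crst_table_init(table, etype);
		return table;
	}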
Signed-off-by: Vishal Moola (Oracle) --- arch/s390/mm/gmap.c | 230 ++++++++++++++++++++++++-------------------- 1 file changed, 128 insertions(+), 102 deletions(-) diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c index a61ea1a491dc..9c6ea1d16e09 100644 --- a/arch/s390/mm/gmap.c +++ b/arch/s390/mm/gmap.c @@ -34,7 +34,7 @@ static struct gmap *gmap_alloc(unsigned long limit) { struct gmap *gmap; - struct page *page; + struct ptdesc *ptdesc; unsigned long *table; unsigned long etype, atype; @@ -67,12 +67,12 @@ static struct gmap *gmap_alloc(unsigned long limit) spin_lock_init(&gmap->guest_table_lock); spin_lock_init(&gmap->shadow_lock); refcount_set(&gmap->ref_count, 1); - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) goto out_free; - page->_pt_s390_gaddr = 0; - list_add(&page->lru, &gmap->crst_list); - table = page_to_virt(page); + ptdesc->_pt_s390_gaddr = 0; + list_add(&ptdesc->pt_list, &gmap->crst_list); + table = ptdesc_to_virt(ptdesc); crst_table_init(table, etype); gmap->table = table; gmap->asce = atype | _ASCE_TABLE_LENGTH | @@ -181,25 +181,25 @@ static void gmap_rmap_radix_tree_free(struct radix_tree_root *root) */ static void gmap_free(struct gmap *gmap) { - struct page *page, *next; + struct ptdesc *ptdesc, *next; /* Flush tlb of all gmaps (if not already done for shadows) */ if (!(gmap_is_shadow(gmap) && gmap->removed)) gmap_flush_tlb(gmap); /* Free all segment & region tables. */ - list_for_each_entry_safe(page, next, &gmap->crst_list, lru) { - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + list_for_each_entry_safe(ptdesc, next, &gmap->crst_list, pt_list) { + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); } gmap_radix_tree_free(&gmap->guest_to_host); gmap_radix_tree_free(&gmap->host_to_guest); /* Free additional data for a shadow gmap */ if (gmap_is_shadow(gmap)) { - /* Free all page tables. */ - list_for_each_entry_safe(page, next, &gmap->pt_list, lru) { - page->_pt_s390_gaddr = 0; - page_table_free_pgste(page); + /* Free all ptdesc tables. 
*/ + list_for_each_entry_safe(ptdesc, next, &gmap->pt_list, pt_list) { + ptdesc->_pt_s390_gaddr = 0; + page_table_free_pgste(ptdesc_page(ptdesc)); } gmap_rmap_radix_tree_free(&gmap->host_to_rmap); /* Release reference to the parent */ @@ -308,27 +308,27 @@ EXPORT_SYMBOL_GPL(gmap_get_enabled); static int gmap_alloc_table(struct gmap *gmap, unsigned long *table, unsigned long init, unsigned long gaddr) { - struct page *page; + struct ptdesc *ptdesc; unsigned long *new; /* since we dont free the gmap table until gmap_free we can unlock */ - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - new = page_to_virt(page); + new = ptdesc_to_virt(ptdesc); crst_table_init(new, init); spin_lock(&gmap->guest_table_lock); if (*table & _REGION_ENTRY_INVALID) { - list_add(&page->lru, &gmap->crst_list); + list_add(&ptdesc->pt_list, &gmap->crst_list); *table = __pa(new) | _REGION_ENTRY_LENGTH | (*table & _REGION_ENTRY_TYPE_MASK); - page->_pt_s390_gaddr = gaddr; - page = NULL; + ptdesc->_pt_s390_gaddr = gaddr; + ptdesc = NULL; } spin_unlock(&gmap->guest_table_lock); - if (page) { - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + if (ptdesc) { + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); } return 0; } @@ -341,13 +341,13 @@ static int gmap_alloc_table(struct gmap *gmap, unsigned long *table, */ static unsigned long __gmap_segment_gaddr(unsigned long *entry) { - struct page *page; + struct ptdesc *ptdesc; unsigned long offset; offset = (unsigned long) entry / sizeof(unsigned long); offset = (offset & (PTRS_PER_PMD - 1)) * PMD_SIZE; - page = pmd_pgtable_page((pmd_t *) entry); - return page->_pt_s390_gaddr + offset; + ptdesc = pmd_ptdesc((pmd_t *) entry); + return ptdesc->_pt_s390_gaddr + offset; } /** @@ -1343,6 +1343,7 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr) unsigned long *ste; phys_addr_t sto, pgt; struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); ste = gmap_table_walk(sg, raddr, 1); /* get segment pointer */ @@ -1356,9 +1357,11 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr) __gmap_unshadow_pgt(sg, raddr, __va(pgt)); /* Free page table */ page = phys_to_page(pgt); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - page_table_free_pgste(page); + + page_ptdesc(page); + list_del(&ptdesc->pt_list); + ptdesc->_pt_s390_gaddr = 0; + page_table_free_pgste(ptdesc_page(ptdesc)); } /** @@ -1372,9 +1375,10 @@ static void gmap_unshadow_pgt(struct gmap *sg, unsigned long raddr) static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr, unsigned long *sgt) { - struct page *page; phys_addr_t pgt; int i; + struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); for (i = 0; i < _CRST_ENTRIES; i++, raddr += _SEGMENT_SIZE) { @@ -1385,9 +1389,11 @@ static void __gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr, __gmap_unshadow_pgt(sg, raddr, __va(pgt)); /* Free page table */ page = phys_to_page(pgt); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - page_table_free_pgste(page); + + page_ptdesc(page); + list_del(&ptdesc->pt_list); + ptdesc->_pt_s390_gaddr = 0; + page_table_free_pgste(ptdesc_page(ptdesc)); } } @@ -1403,6 +1409,7 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr) unsigned long r3o, *r3e; phys_addr_t sgt; struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); r3e = gmap_table_walk(sg, raddr, 2); /* get region-3 pointer */ 
@@ -1416,9 +1423,11 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr) __gmap_unshadow_sgt(sg, raddr, __va(sgt)); /* Free segment table */ page = phys_to_page(sgt); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + page_ptdesc(page); + list_del(&ptdesc->pt_list); + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); } /** @@ -1432,9 +1441,10 @@ static void gmap_unshadow_sgt(struct gmap *sg, unsigned long raddr) static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr, unsigned long *r3t) { - struct page *page; phys_addr_t sgt; int i; + struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION3_SIZE) { @@ -1445,9 +1455,11 @@ static void __gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr, __gmap_unshadow_sgt(sg, raddr, __va(sgt)); /* Free segment table */ page = phys_to_page(sgt); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + page_ptdesc(page); + list_del(&ptdesc->pt_list); + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); } } @@ -1463,6 +1475,7 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr) unsigned long r2o, *r2e; phys_addr_t r3t; struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); r2e = gmap_table_walk(sg, raddr, 3); /* get region-2 pointer */ @@ -1476,9 +1489,11 @@ static void gmap_unshadow_r3t(struct gmap *sg, unsigned long raddr) __gmap_unshadow_r3t(sg, raddr, __va(r3t)); /* Free region 3 table */ page = phys_to_page(r3t); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + page_ptdesc(page); + list_del(&ptdesc->pt_list); + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); } /** @@ -1493,8 +1508,9 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr, unsigned long *r2t) { phys_addr_t r3t; - struct page *page; int i; + struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); for (i = 0; i < _CRST_ENTRIES; i++, raddr += _REGION2_SIZE) { @@ -1505,9 +1521,11 @@ static void __gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr, __gmap_unshadow_r3t(sg, raddr, __va(r3t)); /* Free region 3 table */ page = phys_to_page(r3t); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + page_ptdesc(page); + list_del(&ptdesc->pt_list); + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); } } @@ -1523,6 +1541,7 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr) unsigned long r1o, *r1e; struct page *page; phys_addr_t r2t; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); r1e = gmap_table_walk(sg, raddr, 4); /* get region-1 pointer */ @@ -1536,9 +1555,11 @@ static void gmap_unshadow_r2t(struct gmap *sg, unsigned long raddr) __gmap_unshadow_r2t(sg, raddr, __va(r2t)); /* Free region 2 table */ page = phys_to_page(r2t); - list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + page_ptdesc(page); + list_del(&ptdesc->pt_list); + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); } /** @@ -1556,6 +1577,7 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr, struct page *page; phys_addr_t r2t; int i; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); asce = __pa(r1t) | _ASCE_TYPE_REGION1; @@ -1569,9 +1591,11 @@ static void __gmap_unshadow_r1t(struct gmap *sg, unsigned long raddr, r1t[i] = _REGION1_ENTRY_EMPTY; /* Free region 2 table */ page = phys_to_page(r2t); - 
list_del(&page->lru); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + + ptdesc = page_ptdesc(page); + list_del(&ptdesc->pt_list); + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); } } @@ -1768,18 +1792,18 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t, unsigned long raddr, origin, offset, len; unsigned long *table; phys_addr_t s_r2t; - struct page *page; + struct ptdesc *ptdesc; int rc; BUG_ON(!gmap_is_shadow(sg)); /* Allocate a shadow region second table */ - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - page->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN; + ptdesc->_pt_s390_gaddr = r2t & _REGION_ENTRY_ORIGIN; if (fake) - page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; - s_r2t = page_to_phys(page); + ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; + s_r2t = page_to_phys(ptdesc_page(ptdesc)); /* Install shadow region second table */ spin_lock(&sg->guest_table_lock); table = gmap_table_walk(sg, saddr, 4); /* get region-1 pointer */ @@ -1800,7 +1824,7 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t, _REGION_ENTRY_TYPE_R1 | _REGION_ENTRY_INVALID; if (sg->edat_level >= 1) *table |= (r2t & _REGION_ENTRY_PROTECT); - list_add(&page->lru, &sg->crst_list); + list_add(&ptdesc->pt_list, &sg->crst_list); if (fake) { /* nothing to protect for fake tables */ *table &= ~_REGION_ENTRY_INVALID; @@ -1828,8 +1852,8 @@ int gmap_shadow_r2t(struct gmap *sg, unsigned long saddr, unsigned long r2t, return rc; out_free: spin_unlock(&sg->guest_table_lock); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); return rc; } EXPORT_SYMBOL_GPL(gmap_shadow_r2t); @@ -1853,18 +1877,18 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t, unsigned long raddr, origin, offset, len; unsigned long *table; phys_addr_t s_r3t; - struct page *page; + struct ptdesc *ptdesc; int rc; BUG_ON(!gmap_is_shadow(sg)); /* Allocate a shadow region second table */ - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - page->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN; + ptdesc->_pt_s390_gaddr = r3t & _REGION_ENTRY_ORIGIN; if (fake) - page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; - s_r3t = page_to_phys(page); + ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; + s_r3t = page_to_phys(ptdesc_page(ptdesc)); /* Install shadow region second table */ spin_lock(&sg->guest_table_lock); table = gmap_table_walk(sg, saddr, 3); /* get region-2 pointer */ @@ -1885,7 +1909,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t, _REGION_ENTRY_TYPE_R2 | _REGION_ENTRY_INVALID; if (sg->edat_level >= 1) *table |= (r3t & _REGION_ENTRY_PROTECT); - list_add(&page->lru, &sg->crst_list); + list_add(&ptdesc->pt_list, &sg->crst_list); if (fake) { /* nothing to protect for fake tables */ *table &= ~_REGION_ENTRY_INVALID; @@ -1913,8 +1937,8 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t, return rc; out_free: spin_unlock(&sg->guest_table_lock); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); return rc; } EXPORT_SYMBOL_GPL(gmap_shadow_r3t); @@ -1938,18 +1962,18 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt, unsigned long
raddr, origin, offset, len; unsigned long *table; phys_addr_t s_sgt; - struct page *page; + struct ptdesc *ptdesc; int rc; BUG_ON(!gmap_is_shadow(sg) || (sgt & _REGION3_ENTRY_LARGE)); /* Allocate a shadow segment table */ - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - page->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN; + ptdesc->_pt_s390_gaddr = sgt & _REGION_ENTRY_ORIGIN; if (fake) - page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; - s_sgt = page_to_phys(page); + ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; + s_sgt = page_to_phys(ptdesc_page(ptdesc)); /* Install shadow region second table */ spin_lock(&sg->guest_table_lock); table = gmap_table_walk(sg, saddr, 2); /* get region-3 pointer */ @@ -1970,7 +1994,7 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt, _REGION_ENTRY_TYPE_R3 | _REGION_ENTRY_INVALID; if (sg->edat_level >= 1) *table |= sgt & _REGION_ENTRY_PROTECT; - list_add(&page->lru, &sg->crst_list); + list_add(&ptdesc->pt_list, &sg->crst_list); if (fake) { /* nothing to protect for fake tables */ *table &= ~_REGION_ENTRY_INVALID; @@ -1998,8 +2022,8 @@ int gmap_shadow_sgt(struct gmap *sg, unsigned long saddr, unsigned long sgt, return rc; out_free: spin_unlock(&sg->guest_table_lock); - page->_pt_s390_gaddr = 0; - __free_pages(page, CRST_ALLOC_ORDER); + ptdesc->_pt_s390_gaddr = 0; + ptdesc_free(ptdesc); return rc; } EXPORT_SYMBOL_GPL(gmap_shadow_sgt); @@ -2022,8 +2046,9 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr, int *fake) { unsigned long *table; - struct page *page; int rc; + struct page *page; + struct ptdesc *ptdesc; BUG_ON(!gmap_is_shadow(sg)); spin_lock(&sg->guest_table_lock); @@ -2031,9 +2056,10 @@ int gmap_shadow_pgt_lookup(struct gmap *sg, unsigned long saddr, if (table && !(*table & _SEGMENT_ENTRY_INVALID)) { /* Shadow page tables are full pages (pte+pgste) */ page = pfn_to_page(*table >> PAGE_SHIFT); - *pgt = page->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE; + ptdesc = page_ptdesc(page); + *pgt = ptdesc->_pt_s390_gaddr & ~GMAP_SHADOW_FAKE_TABLE; *dat_protection = !!(*table & _SEGMENT_ENTRY_PROTECT); - *fake = !!(page->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE); + *fake = !!(ptdesc->_pt_s390_gaddr & GMAP_SHADOW_FAKE_TABLE); rc = 0; } else { rc = -EAGAIN; @@ -2062,19 +2088,19 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt, { unsigned long raddr, origin; unsigned long *table; - struct page *page; + struct ptdesc *ptdesc; phys_addr_t s_pgt; int rc; BUG_ON(!gmap_is_shadow(sg) || (pgt & _SEGMENT_ENTRY_LARGE)); /* Allocate a shadow page table */ - page = page_table_alloc_pgste(sg->mm); - if (!page) + ptdesc = page_ptdesc(page_table_alloc_pgste(sg->mm)); + if (!ptdesc) return -ENOMEM; - page->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN; + ptdesc->_pt_s390_gaddr = pgt & _SEGMENT_ENTRY_ORIGIN; if (fake) - page->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; - s_pgt = page_to_phys(page); + ptdesc->_pt_s390_gaddr |= GMAP_SHADOW_FAKE_TABLE; + s_pgt = page_to_phys(ptdesc_page(ptdesc)); /* Install shadow page table */ spin_lock(&sg->guest_table_lock); table = gmap_table_walk(sg, saddr, 1); /* get segment pointer */ @@ -2092,7 +2118,7 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt, /* mark as invalid as long as the parent table is not protected */ *table = (unsigned long) s_pgt | _SEGMENT_ENTRY | (pgt & _SEGMENT_ENTRY_PROTECT) | _SEGMENT_ENTRY_INVALID; -
list_add(&page->lru, &sg->pt_list); + list_add(&ptdesc->pt_list, &sg->pt_list); if (fake) { /* nothing to protect for fake tables */ *table &= ~_SEGMENT_ENTRY_INVALID; @@ -2118,8 +2144,8 @@ int gmap_shadow_pgt(struct gmap *sg, unsigned long saddr, unsigned long pgt, return rc; out_free: spin_unlock(&sg->guest_table_lock); - page->_pt_s390_gaddr = 0; - page_table_free_pgste(page); + ptdesc->_pt_s390_gaddr = 0; + page_table_free_pgste(ptdesc_page(ptdesc)); return rc; } @@ -2823,11 +2849,11 @@ EXPORT_SYMBOL_GPL(__s390_uv_destroy_range); */ void s390_unlist_old_asce(struct gmap *gmap) { - struct page *old; + struct ptdesc *old; - old = virt_to_page(gmap->table); + old = virt_to_ptdesc(gmap->table); spin_lock(&gmap->guest_table_lock); - list_del(&old->lru); + list_del(&old->pt_list); /* * Sometimes the topmost page might need to be "removed" multiple * times, for example if the VM is rebooted into secure mode several @@ -2842,7 +2868,7 @@ void s390_unlist_old_asce(struct gmap *gmap) * pointers, so list_del can work (and do nothing) without * dereferencing stale or invalid pointers. */ - INIT_LIST_HEAD(&old->lru); + INIT_LIST_HEAD(&old->pt_list); spin_unlock(&gmap->guest_table_lock); } EXPORT_SYMBOL_GPL(s390_unlist_old_asce); @@ -2860,15 +2886,15 @@ EXPORT_SYMBOL_GPL(s390_unlist_old_asce); int s390_replace_asce(struct gmap *gmap) { unsigned long asce; - struct page *page; + struct ptdesc *ptdesc; void *table; s390_unlist_old_asce(gmap); - page = alloc_pages(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); - if (!page) + ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, CRST_ALLOC_ORDER); + if (!ptdesc) return -ENOMEM; - table = page_to_virt(page); + table = ptdesc_to_virt(ptdesc); memcpy(table, gmap->table, 1UL << (CRST_ALLOC_ORDER + PAGE_SHIFT)); /* @@ -2877,7 +2903,7 @@ int s390_replace_asce(struct gmap *gmap) * it will be freed when the VM is torn down. 
*/ spin_lock(&gmap->guest_table_lock); - list_add(&page->lru, &gmap->crst_list); + list_add(&ptdesc->pt_list, &gmap->crst_list); spin_unlock(&gmap->guest_table_lock); /* Set new table origin while preserving existing ASCE control bits */ From patchwork Mon Apr 17 20:50:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214632 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6031EC7EE24 for ; Mon, 17 Apr 2023 20:53:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231289AbjDQUxf (ORCPT ); Mon, 17 Apr 2023 16:53:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39962 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231151AbjDQUx0 (ORCPT ); Mon, 17 Apr 2023 16:53:26 -0400 Received: from mail-pj1-x102f.google.com (mail-pj1-x102f.google.com [IPv6:2607:f8b0:4864:20::102f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8B8E36EBB; Mon, 17 Apr 2023 13:53:08 -0700 (PDT) Received: by mail-pj1-x102f.google.com with SMTP id x8-20020a17090a6b4800b002474c5d3367so9750107pjl.2; Mon, 17 Apr 2023 13:53:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764788; x=1684356788; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=cDm5QDDKVVs67sujhxqGF1xdxDwJ7Cf+86boh2cHUEk=; b=Bg9zrwHmz+zWpBrO553L6xcdY/9UqoWF+7JCbvFGdP25XqpLPTgtgzoeSaxtZoExE9 /hfw60KOOdzfPF4LPrgT1cUNcQCYvvrOjqBV5EeXW1oBY3+U5yLq/ZcoGhd2glQtHjSd E1SNfE4qG9orNxm4nPGVLtj0ZQ+EoD5+Of2kyr0VxZlAahOe3asPxpa5GldaOSiEcuod A/YatVmT6GM0jfgVRuBueRMwbH+1Zmb4szBdOpsRslV8e0A8Jy37baDQLXTR/uZAq8Io SplJDGW4RHssNwF9j3bgci8G5WEKKtAuNGcD0sy8PlvNVq51dUIvBHLo/SQen5NnMvRt jmoA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764788; x=1684356788; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=cDm5QDDKVVs67sujhxqGF1xdxDwJ7Cf+86boh2cHUEk=; b=GG6ndrkzF0Q61dRR91QFxjhzxoiHpQItbqaQlN66FrMhFUkfs6J/23EhKd4oWzH21B LhCQRrP4YQutjJqMpB7S8wNv0gjzmy+QXUUCrOg9iJTni6EAg8tLMkIQ8nOGnKOVjABD YTcaofWjGPbS1L7Xl3xSWOyDEMPkrLmD6hQ+swz0fdpPFIXMGTkO1m9xyVfC7Vx8zZEZ 7nLhJqmJb4fNpB1MePQGmExYqj3MRMSEkVymsdNFLTXYSvJKF2uvf/yTG42Hk1lmS5N0 Msasq4hWvJ108JmfOFaeyxqSoeNfqSg59Ty4EA8y95oyXDB7t0tITtEa1F33INMgdry5 ua9Q== X-Gm-Message-State: AAQBX9dSkrADnr+hMDbKydiWl+FWleP194CrqXgya9HxtXiGajg/XTXE XxeUhqeXODwYGpG+/TJYxAw= X-Google-Smtp-Source: AKy350Z9i6yPCDN1KgNRMsqi9oENFCLbYvx/eJwTeNJWe0OuxQ2o4hV6w1fDsy15hPJg9C6OAHihoA== X-Received: by 2002:a17:90a:6aca:b0:246:82ac:b6b2 with SMTP id b10-20020a17090a6aca00b0024682acb6b2mr15576120pjm.9.1681764787792; Mon, 17 Apr 2023 13:53:07 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.06 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:07 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 16/33] s390: Convert various pgalloc functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:31 -0700 Message-Id: <20230417205048.15870-17-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use ptdesc_alloc() and ptdesc_address() instead to help standardize page tables further. Signed-off-by: Vishal Moola (Oracle) --- arch/s390/include/asm/pgalloc.h | 4 +- arch/s390/include/asm/tlb.h | 4 +- arch/s390/mm/pgalloc.c | 108 ++++++++++++++++---------------- 3 files changed, 59 insertions(+), 57 deletions(-) diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h index 17eb618f1348..9841481560ae 100644 --- a/arch/s390/include/asm/pgalloc.h +++ b/arch/s390/include/asm/pgalloc.h @@ -86,7 +86,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr) if (!table) return NULL; crst_table_init(table, _SEGMENT_ENTRY_EMPTY); - if (!pgtable_pmd_page_ctor(virt_to_page(table))) { + if (!ptdesc_pmd_ctor(virt_to_ptdesc(table))) { crst_table_free(mm, table); return NULL; } @@ -97,7 +97,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) { if (mm_pmd_folded(mm)) return; - pgtable_pmd_page_dtor(virt_to_page(pmd)); + ptdesc_pmd_dtor(virt_to_ptdesc(pmd)); crst_table_free(mm, (unsigned long *) pmd); } diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index b91f4a9b044c..1388c819b467 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -89,12 +89,12 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, { if (mm_pmd_folded(tlb->mm)) return; - pgtable_pmd_page_dtor(virt_to_page(pmd)); + ptdesc_pmd_dtor(virt_to_ptdesc(pmd)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_puds = 1; - tlb_remove_table(tlb, pmd); + tlb_remove_ptdesc(tlb, pmd); } /* diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index 6b99932abc66..16a29d2cfe85 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -43,17 +43,17 @@ __initcall(page_table_register_sysctl); unsigned long *crst_table_alloc(struct mm_struct *mm) { - struct page *page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER); + struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, CRST_ALLOC_ORDER); - if (!page) + if (!ptdesc) return NULL; - arch_set_page_dat(page, CRST_ALLOC_ORDER); - return (unsigned long *) page_to_virt(page); + arch_set_page_dat(ptdesc_page(ptdesc), CRST_ALLOC_ORDER); + return (unsigned long *) ptdesc_to_virt(ptdesc); } void crst_table_free(struct mm_struct *mm, unsigned long *table) { - free_pages((unsigned long)table, CRST_ALLOC_ORDER); + ptdesc_free(virt_to_ptdesc(table)); } static void
__crst_table_upgrade(void *arg) @@ -140,21 +140,21 @@ static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits) struct page *page_table_alloc_pgste(struct mm_struct *mm) { - struct page *page; + struct ptdesc *ptdesc; u64 *table; - page = alloc_page(GFP_KERNEL); - if (page) { - table = (u64 *)page_to_virt(page); + ptdesc = ptdesc_alloc(GFP_KERNEL, 0); + if (ptdesc) { + table = (u64 *)ptdesc_to_virt(ptdesc); memset64(table, _PAGE_INVALID, PTRS_PER_PTE); memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE); } - return page; + return ptdesc_page(ptdesc); } void page_table_free_pgste(struct page *page) { - __free_page(page); + ptdesc_free(page_ptdesc(page)); } #endif /* CONFIG_PGSTE */ @@ -230,7 +230,7 @@ void page_table_free_pgste(struct page *page) unsigned long *page_table_alloc(struct mm_struct *mm) { unsigned long *table; - struct page *page; + struct ptdesc *ptdesc; unsigned int mask, bit; /* Try to get a fragment of a 4K page as a 2K page table */ @@ -238,9 +238,9 @@ unsigned long *page_table_alloc(struct mm_struct *mm) table = NULL; spin_lock_bh(&mm->context.lock); if (!list_empty(&mm->context.pgtable_list)) { - page = list_first_entry(&mm->context.pgtable_list, - struct page, lru); - mask = atomic_read(&page->pt_frag_refcount); + ptdesc = list_first_entry(&mm->context.pgtable_list, - struct ptdesc, pt_list); + mask = atomic_read(&ptdesc->pt_frag_refcount); /* * The pending removal bits must also be checked. * Failure to do so might lead to an impossible */ mask = (mask | (mask >> 4)) & 0x03U; if (mask != 0x03U) { - table = (unsigned long *) page_to_virt(page); + table = (unsigned long *) ptdesc_to_virt(ptdesc); bit = mask & 1; /* =1 -> second 2K */ if (bit) table += PTRS_PER_PTE; - atomic_xor_bits(&page->pt_frag_refcount, + atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x01U << bit); - list_del(&page->lru); + list_del(&ptdesc->pt_list); } } spin_unlock_bh(&mm->context.lock); @@ -267,27 +267,27 @@ unsigned long *page_table_alloc(struct mm_struct *mm) return table; } /* Allocate a fresh page */ - page = alloc_page(GFP_KERNEL); - if (!page) + ptdesc = ptdesc_alloc(GFP_KERNEL, 0); + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if (!ptdesc_pte_ctor(ptdesc)) { + ptdesc_free(ptdesc); return NULL; } - arch_set_page_dat(page, 0); + arch_set_page_dat(ptdesc_page(ptdesc), 0); /* Initialize page table */ - table = (unsigned long *) page_to_virt(page); + table = (unsigned long *) ptdesc_to_virt(ptdesc); if (mm_alloc_pgste(mm)) { /* Return 4K page table with PGSTEs */ - atomic_xor_bits(&page->pt_frag_refcount, 0x03U); + atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U); memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE); memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE); } else { /* Return the first 2K fragment of the page */ - atomic_xor_bits(&page->pt_frag_refcount, 0x01U); + atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x01U); memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE); spin_lock_bh(&mm->context.lock); - list_add(&page->lru, &mm->context.pgtable_list); + list_add(&ptdesc->pt_list, &mm->context.pgtable_list); spin_unlock_bh(&mm->context.lock); } return table; @@ -309,9 +309,8 @@ static void page_table_release_check(struct page *page, void *table, void page_table_free(struct mm_struct *mm, unsigned long *table) { unsigned int mask, bit, half; - struct page *page; + struct ptdesc *ptdesc = virt_to_ptdesc(table); - page = virt_to_page(table); if
(!mm_alloc_pgste(mm)) { /* Free 2K page table fragment of a 4K page */ bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t)); @@ -321,39 +320,38 @@ void page_table_free(struct mm_struct *mm, unsigned long *table) * will happen outside of the critical section from this * function or from __tlb_remove_table() */ - mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit); + mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit); if (mask & 0x03U) - list_add(&page->lru, &mm->context.pgtable_list); + list_add(&ptdesc->pt_list, &mm->context.pgtable_list); else - list_del(&page->lru); + list_del(&ptdesc->pt_list); spin_unlock_bh(&mm->context.lock); - mask = atomic_xor_bits(&page->pt_frag_refcount, 0x10U << bit); + mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x10U << bit); if (mask != 0x00U) return; half = 0x01U << bit; } else { half = 0x03U; - mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U); + mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U); } - page_table_release_check(page, table, half, mask); - pgtable_pte_page_dtor(page); - __free_page(page); + page_table_release_check(ptdesc_page(ptdesc), table, half, mask); + ptdesc_pte_dtor(ptdesc); + ptdesc_free(ptdesc); } void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table, unsigned long vmaddr) { struct mm_struct *mm; - struct page *page; unsigned int bit, mask; + struct ptdesc *ptdesc = virt_to_ptdesc(table); mm = tlb->mm; - page = virt_to_page(table); if (mm_alloc_pgste(mm)) { gmap_unlink(mm, table, vmaddr); table = (unsigned long *) ((unsigned long)table | 0x03U); - tlb_remove_table(tlb, table); + tlb_remove_ptdesc(tlb, table); return; } bit = ((unsigned long) table & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t)); @@ -363,11 +361,11 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table, * outside of the critical section from __tlb_remove_table() or from * page_table_free() */ - mask = atomic_xor_bits(&page->pt_frag_refcount, 0x11U << bit); + mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x11U << bit); if (mask & 0x03U) - list_add_tail(&page->lru, &mm->context.pgtable_list); + list_add_tail(&ptdesc->pt_list, &mm->context.pgtable_list); else - list_del(&page->lru); + list_del(&ptdesc->pt_list); spin_unlock_bh(&mm->context.lock); table = (unsigned long *) ((unsigned long) table | (0x01U << bit)); tlb_remove_table(tlb, table); @@ -377,7 +375,7 @@ void __tlb_remove_table(void *_table) { unsigned int mask = (unsigned long) _table & 0x03U, half = mask; void *table = (void *)((unsigned long) _table ^ mask); - struct page *page = virt_to_page(table); + struct ptdesc *ptdesc = virt_to_ptdesc(table); switch (half) { case 0x00U: /* pmd, pud, or p4d */ @@ -385,18 +383,18 @@ void __tlb_remove_table(void *_table) return; case 0x01U: /* lower 2K of a 4K page table */ case 0x02U: /* higher 2K of a 4K page table */ - mask = atomic_xor_bits(&page->pt_frag_refcount, mask << 4); + mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, mask << 4); if (mask != 0x00U) return; break; case 0x03U: /* 4K page table with pgstes */ - mask = atomic_xor_bits(&page->pt_frag_refcount, 0x03U); + mask = atomic_xor_bits(&ptdesc->pt_frag_refcount, 0x03U); break; } - page_table_release_check(page, table, half, mask); - pgtable_pte_page_dtor(page); - __free_page(page); + page_table_release_check(ptdesc_page(ptdesc), table, half, mask); + ptdesc_pte_dtor(ptdesc); + ptdesc_free(ptdesc); } /* @@ -424,16 +422,20 @@ static void base_pgt_free(unsigned long *table) static unsigned long *base_crst_alloc(unsigned 
long val) { unsigned long *table; + struct ptdesc *ptdesc; - table = (unsigned long *)__get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER); - if (table) - crst_table_init(table, val); + ptdesc = ptdesc_alloc(GFP_KERNEL, CRST_ALLOC_ORDER); + if (!ptdesc) + return NULL; + table = ptdesc_address(ptdesc); + + crst_table_init(table, val); return table; } static void base_crst_free(unsigned long *table) { - free_pages((unsigned long)table, CRST_ALLOC_ORDER); + ptdesc_free(virt_to_ptdesc(table)); } #define BASE_ADDR_END_FUNC(NAME, SIZE) \ From patchwork Mon Apr 17 20:50:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214634 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id C6610C88C96 for ; Mon, 17 Apr 2023 20:54:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231154AbjDQUx7 (ORCPT ); Mon, 17 Apr 2023 16:53:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39966 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231156AbjDQUx0 (ORCPT ); Mon, 17 Apr 2023 16:53:26 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id A4F327286; Mon, 17 Apr 2023 13:53:09 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id s23-20020a17090aba1700b00247a8f0dd50so3410383pjr.1; Mon, 17 Apr 2023 13:53:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764789; x=1684356789; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=ySUR2NhNYIi45iieMc0XGAHKG7J++HPTQWlVu2Tasao=; b=r15/y1KDif+Drf/4AN1jpq4aAvcL2g0C6yBlVejRZjViwAKdx4tN5lVxRrilW72t8W lxYrwwZ1V+6OuCBWGNjnVnFHs9fxA0+y+Hq/T2Xun9HaX6uevRtPhFT+6Fyn6rL6cuxl Z3/9OHOOCrjsch5rq9oWYEod7RCMO4fFtyd7gUBrmXZG32pCnlsISiWWId2b7LrGrhST kpDrkb186Ifgo7UoD/Ti/tLM6Mv74UfKgE7j3u4BhSVZUj1yrtlCsZDlhuTy+CmGW1TT t+a+qaRd0Bf5+Wl/CDmlKFz2wUcV2Pe9TbKsnSy3YV2N+zpAWDFA5l3JzNu+kIHHm1Jf 2puA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764789; x=1684356789; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=ySUR2NhNYIi45iieMc0XGAHKG7J++HPTQWlVu2Tasao=; b=RcBvuNfBNrhEroN791ymX7dg6iLG0BijC5EsoWFExuCO/5rOtx/bGdZp9NKmGzLuBy mC+A7mBRn5tAwExWbPXrukQh5q7NKlwdYY+oz0LFJ8qQwrkGwo1rFyqrM8aSELC4cO/G ArkBpBS10DWhO04QNP1xyMS0MnQmRrYVsKOCQS6FJDoy48SMYKUDo7SmEJE3Tb7DlAUS tbp1n/IG+MSQhnom3EPym0yJNpEdtc6g04nqc5/cOWAj79eoUW7SwA1AE0/bswk2qufs ooHKMPivC24vb309xtZyIeUwssrus4tGqOc6pN6AWYdTjCWd6RZmZA9TXjUjDQZ2Kxi1 seAQ== X-Gm-Message-State: AAQBX9fnkdktYyVT9RKvWnpLYJTZkhG0tLho+JfzU755RHG7QsroQQrL skesA5pigqo0e089FHyvEnI= X-Google-Smtp-Source: AKy350ZVl3PNdJ0+nN/Rb9LpMw5+uyjbe8u9t4N4aIteNFpQkEwBCpgg7+YRGDc+SZWH4x4d964lpw== X-Received: by 2002:a17:90a:d205:b0:234:409:9752 with SMTP id o5-20020a17090ad20500b0023404099752mr15223725pju.25.1681764789129; Mon, 17 Apr 2023 13:53:09 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id 
h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:08 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 17/33] mm: Remove page table members from struct page Date: Mon, 17 Apr 2023 13:50:32 -0700 Message-Id: <20230417205048.15870-18-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org The page table members are now split out into their own ptdesc struct. Remove them from struct page. Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm_types.h | 14 -------------- include/linux/pgtable.h | 3 --- 2 files changed, 17 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 2616d64c0e8c..4355f95abc5a 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -141,20 +141,6 @@ struct page { struct { /* Tail pages of compound page */ unsigned long compound_head; /* Bit zero is set */ }; - struct { /* Page table pages */ - unsigned long _pt_pad_1; /* compound_head */ - pgtable_t pmd_huge_pte; /* protected by page->ptl */ - unsigned long _pt_s390_gaddr; /* mapping */ - union { - struct mm_struct *pt_mm; /* x86 pgds only */ - atomic_t pt_frag_refcount; /* powerpc */ - }; -#if ALLOC_SPLIT_PTLOCKS - spinlock_t *ptl; -#else - spinlock_t ptl; -#endif - }; struct { /* ZONE_DEVICE pages */ /** @pgmap: Points to the hosting device page map. 
*/ struct dev_pagemap *pgmap; diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h index 7cd803aa38eb..8cacdf1fc411 100644 --- a/include/linux/pgtable.h +++ b/include/linux/pgtable.h @@ -91,9 +91,6 @@ TABLE_MATCH(flags, __page_flags); TABLE_MATCH(compound_head, pt_list); TABLE_MATCH(compound_head, _pt_pad_1); TABLE_MATCH(mapping, _pt_s390_gaddr); -TABLE_MATCH(pmd_huge_pte, pmd_huge_pte); -TABLE_MATCH(pt_mm, pt_mm); -TABLE_MATCH(ptl, ptl); #undef TABLE_MATCH static_assert(sizeof(struct ptdesc) <= sizeof(struct page)); From patchwork Mon Apr 17 20:50:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214636 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 7AFC1CA9EAD for ; Mon, 17 Apr 2023 20:54:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231365AbjDQUyB (ORCPT ); Mon, 17 Apr 2023 16:54:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39456 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231175AbjDQUx1 (ORCPT ); Mon, 17 Apr 2023 16:53:27 -0400 Received: from mail-pl1-x635.google.com (mail-pl1-x635.google.com [IPv6:2607:f8b0:4864:20::635]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 43EFC72A0; Mon, 17 Apr 2023 13:53:11 -0700 (PDT) Received: by mail-pl1-x635.google.com with SMTP id lh8so14361065plb.1; Mon, 17 Apr 2023 13:53:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764790; x=1684356790; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=dfen1Ya6fa7BCRlN1BPOcmTNp3Sy/DALCm51X/wHZX4=; b=rtRu53vchXs9D5cRJ3CetoIWLY2lh3C5JE3MRdQ44xVuQfskdtiHDLlzsBbYSofn7L e8lSzFUhD7A9hljeGTauQu8egrDhlG5YRZnhWotrw/49W7ZikJ5WwMzRyLlm7d1p91Fj tP9nyG9Dd28ZXF5MIPK2qlE+ewnSWVXYr1sMPo4LeXBmaVcfj1OQVCyd1BVW0WzbvUsd N8PJAsJH/tMmSy67sjwvjLZ5yO4kk3gDbHJjSYjg5iU+rvwTRrmDeUSPnhCF/kmv3N4w dd20d710X+Q7eq6GtdItIvNuKGeMwqGa7YpLc42c8kFJ7PC15/zIHwILDzQm/t8Xoxcw qHNQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764790; x=1684356790; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=dfen1Ya6fa7BCRlN1BPOcmTNp3Sy/DALCm51X/wHZX4=; b=OY1KFMLI4tZ4ACt3FXqhV6jcdIZtDK43TVIfxD5uOoi1M5XS8CIdFmnOdcU2KWcNuI wqoCPsgRZuTBGvLAJyJKSmNjsANHjUTIZeK8BN//lnf7HqoMKCNjA3/jqQraOy+DyPNe /mgI+MeFH+r5cozLwVclr0UoQ1EgUbUlzJz1011U2BEvHo1X89XBcByuYLcguhc/4cis ESMTmd9fKxTGZkAcjF0fnvPn2TjvaXr7a7WGzZprWNVETZ5tPRKzkr6NmA416XorxrIl O/SX6INCK+YSjwXRfu9DoDy/mHfjDjiP5CXG9MpAYUYOHiWvDKakkrPuFkGYWjIQsBqK kXxQ== X-Gm-Message-State: AAQBX9dc06lFWGf8eeo8uOFXP4rHhXtImpJ9R2n3yX251sUK9SpqgFZQ 1+QjZNUoGkIpS+UiTjIoY24= X-Google-Smtp-Source: AKy350ZvnFjO+S7U7bV96piHTTfvEOv0mlMsOissXbg6oSWmd239xcTtUjPTO8K/viecYNaiDpXOfA== X-Received: by 2002:a17:90a:e997:b0:23f:9448:89c2 with SMTP id v23-20020a17090ae99700b0023f944889c2mr16622304pjy.7.1681764790439; Mon, 17 Apr 2023 13:53:10 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.09 
(version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:10 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 18/33] pgalloc: Convert various functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:33 -0700 Message-Id: <20230417205048.15870-19-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use ptdesc_alloc() and ptdesc_address() instead to help standardize page tables further. Signed-off-by: Vishal Moola (Oracle) --- include/asm-generic/pgalloc.h | 62 +++++++++++++++++++++-------------- 1 file changed, 37 insertions(+), 25 deletions(-) diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h index a7cf825befae..7d4a1f5d3c17 100644 --- a/include/asm-generic/pgalloc.h +++ b/include/asm-generic/pgalloc.h @@ -18,7 +18,11 @@ */ static inline pte_t *__pte_alloc_one_kernel(struct mm_struct *mm) { - return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL); + struct ptdesc *ptdesc = ptdesc_alloc(GFP_PGTABLE_KERNEL, 0); + + if (!ptdesc) + return NULL; + return (pte_t *)ptdesc_address(ptdesc); } #ifndef __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL @@ -41,7 +45,7 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) */ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) { - free_page((unsigned long)pte); + ptdesc_free(virt_to_ptdesc(pte)); } /** @@ -49,7 +53,7 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) * @mm: the mm_struct of the current context * @gfp: GFP flags to use for the allocation * - * Allocates a page and runs the pgtable_pte_page_ctor(). + * Allocates a ptdesc and runs the ptdesc_pte_ctor(). * * This function is intended for architectures that need * anything beyond simple page allocation or must have custom GFP flags. @@ -58,17 +62,17 @@ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) */ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp) { - struct page *pte; + struct ptdesc *ptdesc; - pte = alloc_page(gfp); - if (!pte) + ptdesc = ptdesc_alloc(gfp, 0); + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(pte)) { - __free_page(pte); + if (!ptdesc_pte_ctor(ptdesc)) { + ptdesc_free(ptdesc); return NULL; } - return pte; + return ptdesc_page(ptdesc); } #ifndef __HAVE_ARCH_PTE_ALLOC_ONE @@ -76,7 +80,7 @@ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp) * pte_alloc_one - allocate a page for PTE-level user page table * @mm: the mm_struct of the current context * - * Allocates a page and runs the pgtable_pte_page_ctor(). 
+ * Allocates a ptdesc and runs the ptdesc_pte_ctor(). * * Return: `struct page` initialized as page table or %NULL on error */ @@ -98,8 +102,10 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm) */ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) { - pgtable_pte_page_dtor(pte_page); - __free_page(pte_page); + struct ptdesc *ptdesc = page_ptdesc(pte_page); + + ptdesc_pte_dtor(ptdesc); + ptdesc_free(ptdesc); } @@ -110,7 +116,7 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) * pmd_alloc_one - allocate a page for PMD-level page table * @mm: the mm_struct of the current context * - * Allocates a page and runs the pgtable_pmd_page_ctor(). + * Allocates a ptdesc and runs the ptdesc_pmd_ctor(). * Allocations use %GFP_PGTABLE_USER in user context and * %GFP_PGTABLE_KERNEL in kernel context. * @@ -118,28 +124,30 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) */ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr) { - struct page *page; + struct ptdesc *ptdesc; gfp_t gfp = GFP_PGTABLE_USER; if (mm == &init_mm) gfp = GFP_PGTABLE_KERNEL; - page = alloc_page(gfp); - if (!page) + ptdesc = ptdesc_alloc(gfp, 0); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(page)) { - __free_page(page); + if (!ptdesc_pmd_ctor(ptdesc)) { + ptdesc_free(ptdesc); return NULL; } - return (pmd_t *)page_address(page); + return (pmd_t *)ptdesc_address(ptdesc); } #endif #ifndef __HAVE_ARCH_PMD_FREE static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) { + struct ptdesc *ptdesc = virt_to_ptdesc(pmd); + BUG_ON((unsigned long)pmd & (PAGE_SIZE-1)); - pgtable_pmd_page_dtor(virt_to_page(pmd)); - free_page((unsigned long)pmd); + ptdesc_pmd_dtor(ptdesc); + ptdesc_free(ptdesc); } #endif @@ -149,11 +157,15 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) static inline pud_t *__pud_alloc_one(struct mm_struct *mm, unsigned long addr) { - gfp_t gfp = GFP_PGTABLE_USER; + gfp_t gfp = GFP_PGTABLE_USER | __GFP_ZERO; + struct ptdesc *ptdesc; if (mm == &init_mm) gfp = GFP_PGTABLE_KERNEL; - return (pud_t *)get_zeroed_page(gfp); + ptdesc = ptdesc_alloc(gfp, 0); + if (!ptdesc) + return NULL; + return (pud_t *)ptdesc_address(ptdesc); } #ifndef __HAVE_ARCH_PUD_ALLOC_ONE @@ -175,7 +187,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr) static inline void __pud_free(struct mm_struct *mm, pud_t *pud) { BUG_ON((unsigned long)pud & (PAGE_SIZE-1)); - free_page((unsigned long)pud); + ptdesc_free(virt_to_ptdesc(pud)); } #ifndef __HAVE_ARCH_PUD_FREE @@ -190,7 +202,7 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud) #ifndef __HAVE_ARCH_PGD_FREE static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) { - free_page((unsigned long)pgd); + ptdesc_free(virt_to_ptdesc(pgd)); } #endif From patchwork Mon Apr 17 20:50:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214641 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6F5F2C88CAE for ; Mon, 17 Apr 2023 20:54:08 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230432AbjDQUyG (ORCPT ); Mon, 17 Apr 2023 16:54:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39920 "EHLO 
lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231181AbjDQUx2 (ORCPT ); Mon, 17 Apr 2023 16:53:28 -0400 Received: from mail-pl1-x632.google.com (mail-pl1-x632.google.com [IPv6:2607:f8b0:4864:20::632]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 243E26E82; Mon, 17 Apr 2023 13:53:12 -0700 (PDT) Received: by mail-pl1-x632.google.com with SMTP id n17so11376824pln.8; Mon, 17 Apr 2023 13:53:12 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764791; x=1684356791; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=gfpIRfFq9zSU9T7uNfhlcYr/W6dIxhZkZgP28wF3SDA=; b=gYiTGJsq3tS0FDSy3/Ogj5YUQaoNGj6XWEQGKGQUuQuqn1egVSKA3seChXbxd6ie6X KVLVgtxbt+Gl9lkEwLQ+F+iG9Kz1ErH6Jdv2PrLPdPwp/UgBJZIaGw67YMksWIwKnmyC 7jVlh7AKx280dtpirqSUFhv9Pyd2cc0T8yO2lBO5mVRpibO3YWd6y8PCZ2jFDrnw2dOM qIyvjKI6SfUletTh72Za02rULTtpxgw/RsbeOg0p8ic4ZkVJUUvnJu+rLoRN8U3nnoDL 0yuCtQCJj2oWJ5XTJVsuTcfyXkKiXbSXxA+TapEEDARVJ5R07j+Rkpo9klzzaVdh3pXw NvCg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764791; x=1684356791; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=gfpIRfFq9zSU9T7uNfhlcYr/W6dIxhZkZgP28wF3SDA=; b=br2hh4ajqO3C5IrGeKG4gU7xFpmo7V6FG4mTpzQphgKiNLSz8hl55upTY73bm7nJSH Z+/PygIF/T0jNeeIkr6CbThn7enaH9kYb95Vwa0dKwJLs4K9LvfxYrOLVhCZf52U3Ehl /kHvCKCAXCGV7s07g0bGgMORgt8/8JlsFkwTT+lkvWutviHVwa4FuxrnTfyeUGLnaR9l 54Jz+rQ8dlF+IZ0+o3OpsfKYRFB68dQMDS/z5xukaZte9H0pZiDcNa+VDDFu8eDVqxhf VoVNRtXX883c/6lg9DFXok2Ea6t9Wwwkd1CCORjmEsfhwauzH80iKEKD7n0E9bfsPy8A 3h1w== X-Gm-Message-State: AAQBX9ef+XykCYkHY+cnmLu76FZ66BISXgbaSCJxZ80B/Fp+o8qHJrih +mws+Qv5FW/3p7jpGO6lM/0LuW9TO5U8bA== X-Google-Smtp-Source: AKy350bvWV8yzhiozqrrwWcRpctJK00ZbC2YeKZYOK2IgaSQcSotd8fWuCQxrW1E1m+ogfO6p1yZ/Q== X-Received: by 2002:a17:902:b789:b0:1a6:8024:321e with SMTP id e9-20020a170902b78900b001a68024321emr189655pls.34.1681764791672; Mon, 17 Apr 2023 13:53:11 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:11 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 19/33] arm: Convert various functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:34 -0700 Message-Id: <20230417205048.15870-20-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various 
page table functions to use ptdescs. late_alloc() also uses the __get_free_pages() helper function. Convert this to use ptdesc_alloc() and ptdesc_address() instead to help standardize page tables further. Signed-off-by: Vishal Moola (Oracle) --- arch/arm/include/asm/tlb.h | 12 +++++++----- arch/arm/mm/mmu.c | 6 +++--- 2 files changed, 10 insertions(+), 8 deletions(-) diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index b8cbe03ad260..9ab8a6929d35 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -39,7 +39,9 @@ static inline void __tlb_remove_table(void *_table) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - pgtable_pte_page_dtor(pte); + struct ptdesc *ptdesc = page_ptdesc(pte); + + ptdesc_pte_dtor(ptdesc); #ifndef CONFIG_ARM_LPAE /* @@ -50,17 +52,17 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) __tlb_adjust_range(tlb, addr - PAGE_SIZE, 2 * PAGE_SIZE); #endif - tlb_remove_table(tlb, pte); + tlb_remove_ptdesc(tlb, ptdesc); } static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) { #ifdef CONFIG_ARM_LPAE - struct page *page = virt_to_page(pmdp); + struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pgtable_pmd_page_dtor(page); - tlb_remove_table(tlb, page); + ptdesc_pmd_dtor(ptdesc); + tlb_remove_ptdesc(tlb, ptdesc); #endif } diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index 463fc2a8448f..7add505bd797 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -737,11 +737,11 @@ static void __init *early_alloc(unsigned long sz) static void *__init late_alloc(unsigned long sz) { - void *ptr = (void *)__get_free_pages(GFP_PGTABLE_KERNEL, get_order(sz)); + void *ptdesc = ptdesc_alloc(GFP_PGTABLE_KERNEL, get_order(sz)); - if (!ptr || !pgtable_pte_page_ctor(virt_to_page(ptr))) + if (!ptdesc || !ptdesc_pte_ctor(ptdesc)) BUG(); - return ptr; + return ptdesc; } static pte_t * __init arm_pte_alloc(pmd_t *pmd, unsigned long addr, From patchwork Mon Apr 17 20:50:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214638 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 3B2BCC87FDC for ; Mon, 17 Apr 2023 20:54:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231370AbjDQUyC (ORCPT ); Mon, 17 Apr 2023 16:54:02 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39984 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231179AbjDQUx2 (ORCPT ); Mon, 17 Apr 2023 16:53:28 -0400 Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7E92B72BC; Mon, 17 Apr 2023 13:53:13 -0700 (PDT) Received: by mail-pl1-x631.google.com with SMTP id i8so18359280plt.10; Mon, 17 Apr 2023 13:53:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764793; x=1684356793; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=oJJ/5JNaAhhn0BtCWyESRHZW29NfaaBB7RT6Dtpfjog=; b=VrQ0lWqrs/ZeVvxhrMkEnri25zysQvP6rvUVlziuAvcyoO7aiCs2Ip8nrZdL+4djlt 
EalUxXukap5u9TcR0Ypqd1nj4oCE7Zy1crkVgOFcBvFvQfkcdrgLVV5yu4aZ8sXiNQzS MzbdyHaLybqiTpdHLCapkJdM135ufCzQfSX7LCwPasNWotwzcPJsMf59UKAQgZOOczBx ZYZuB1ddeu/80rtF2gv16gY+bwHM4NdL4pbXt10qFjUxYFen/hCDj/7OzFab+Hq/fc/w UWRA/d9TiIVFZRkxxpOQPugklzg3SfxU/z6fpq4Q2x5DyNF+cvnkXmPJMDWg2KdcBHj4 X1Mg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764793; x=1684356793; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=oJJ/5JNaAhhn0BtCWyESRHZW29NfaaBB7RT6Dtpfjog=; b=L9DOkafzvfMyZgy/UT2lHa+SZidly+EZH319ZEZeF0JWSWyXmww7P+9xtKuKe3B+dO UiQDhGwPJqzjxBKCUG9CyzER/jpKf44owQ+sItGVJIjZ2FOmmulU8dBtMojRB14zEAuq BfZwG1miuuETvoVADHys2DivnhpfuOboqteO+rXNjSmBjcZRAw6x4xJ9AI3kmih7cz/3 3m8zMzvUHEbyvVnkjE3alWE34kTb2HePHxnyerRLE2HwtMcxBvFdKzNFRNH32zOWUqFt iMN0+gBT1Pd57VoTUS+cr03Hp7klI1wgdLy6+wRqG2DlMNq3+HV9/QKS3QgRocQXzG5g 31YA== X-Gm-Message-State: AAQBX9c1j0y3OsL8qY28kThwt/i0iNIsi1EQxcqh2l6T6fHC6rv+PX4o SvvHVYkEHn8UNO9EWfOiLd8= X-Google-Smtp-Source: AKy350Y2w7IaE9PrAymAnxztuomQd4Ga7koe4Zw+ByQfv97VTbICRP51vtaSBBH4u41LkAvC07UcrA== X-Received: by 2002:a17:902:bd86:b0:1a6:c6d4:5586 with SMTP id q6-20020a170902bd8600b001a6c6d45586mr227041pls.13.1681764792942; Mon, 17 Apr 2023 13:53:12 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:12 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 20/33] arm64: Convert various functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:35 -0700 Message-Id: <20230417205048.15870-21-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. 
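To make the mechanical shape of these conversions easier to follow, here is a minimal sketch of the teardown pattern, assuming the ptdesc helpers introduced earlier in this series (page_ptdesc(), ptdesc_pte_dtor(), tlb_remove_ptdesc()); it is not arm64 code verbatim and the *_sketch names are illustrative only:

        /* Before: the teardown path works on struct page directly. */
        static inline void pte_free_tlb_sketch_old(struct mmu_gather *tlb, pgtable_t pte)
        {
                pgtable_pte_page_dtor(pte);     /* undo the PTE-table constructor */
                tlb_remove_table(tlb, pte);     /* defer the free via the mmu_gather */
        }

        /* After: the same path goes through the ptdesc wrappers. */
        static inline void pte_free_tlb_sketch_new(struct mmu_gather *tlb, pgtable_t pte)
        {
                struct ptdesc *ptdesc = page_ptdesc(pte);       /* struct page -> struct ptdesc view */

                ptdesc_pte_dtor(ptdesc);        /* ptdesc equivalent of the page dtor */
                tlb_remove_ptdesc(tlb, ptdesc); /* ptdesc-aware deferred free */
        }

The hunks below apply this same shape to __pte_free_tlb(), __pmd_free_tlb() and __pud_free_tlb().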
Signed-off-by: Vishal Moola (Oracle) --- arch/arm64/include/asm/tlb.h | 14 ++++++++------ arch/arm64/mm/mmu.c | 7 ++++--- 2 files changed, 12 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index c995d1f4594f..6cb70c247e30 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -75,18 +75,20 @@ static inline void tlb_flush(struct mmu_gather *tlb) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - pgtable_pte_page_dtor(pte); - tlb_remove_table(tlb, pte); + struct ptdesc *ptdesc = page_ptdesc(pte); + + ptdesc_pte_dtor(ptdesc); + tlb_remove_ptdesc(tlb, ptdesc); } #if CONFIG_PGTABLE_LEVELS > 2 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) { - struct page *page = virt_to_page(pmdp); + struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pgtable_pmd_page_dtor(page); - tlb_remove_table(tlb, page); + ptdesc_pmd_dtor(ptdesc); + tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -94,7 +96,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, unsigned long addr) { - tlb_remove_table(tlb, virt_to_page(pudp)); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(pudp)); } #endif diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index af6bc8403ee4..5ba005fd607e 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -426,6 +426,7 @@ static phys_addr_t __pgd_pgtable_alloc(int shift) static phys_addr_t pgd_pgtable_alloc(int shift) { phys_addr_t pa = __pgd_pgtable_alloc(shift); + struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa)); /* * Call proper page table ctor in case later we need to @@ -433,12 +434,12 @@ static phys_addr_t pgd_pgtable_alloc(int shift) * this pre-allocated page table. * * We don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK if pmd is - * folded, and if so pgtable_pmd_page_ctor() becomes nop. + * folded, and if so ptdesc_pmd_ctor() becomes nop.
*/ if (shift == PAGE_SHIFT) - BUG_ON(!pgtable_pte_page_ctor(phys_to_page(pa))); + BUG_ON(!ptdesc_pte_ctor(ptdesc)); else if (shift == PMD_SHIFT) - BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa))); + BUG_ON(!ptdesc_pmd_ctor(ptdesc)); return pa; } From patchwork Mon Apr 17 20:50:36 2023 X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214637 From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, 
linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 21/33] csky: Convert __pte_free_tlb() to use ptdescs Date: Mon, 17 Apr 2023 13:50:36 -0700 Message-Id: <20230417205048.15870-22-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. Signed-off-by: Vishal Moola (Oracle) --- arch/csky/include/asm/pgalloc.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h index 7d57e5da0914..af26f1191b43 100644 --- a/arch/csky/include/asm/pgalloc.h +++ b/arch/csky/include/asm/pgalloc.h @@ -63,8 +63,8 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) #define __pte_free_tlb(tlb, pte, address) \ do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page(tlb, pte); \ + ptdesc_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc(tlb, page_ptdesc(pte)); \ } while (0) extern void pagetable_init(void); From patchwork Mon Apr 17 20:50:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214642 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6AF3DCA9EB3 for ; Mon, 17 Apr 2023 20:54:13 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231382AbjDQUyM (ORCPT ); Mon, 17 Apr 2023 16:54:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39996 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231210AbjDQUx3 (ORCPT ); Mon, 17 Apr 2023 16:53:29 -0400 Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com [IPv6:2607:f8b0:4864:20::1030]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CABA876AF; Mon, 17 Apr 2023 13:53:16 -0700 (PDT) Received: by mail-pj1-x1030.google.com with SMTP id v21-20020a17090a459500b0024776162815so6441039pjg.2; Mon, 17 Apr 2023 13:53:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764796; x=1684356796; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=apg836Gn0N7QExyyhUUGNgc2fd2r3iyoAghRwsUjc2o=; b=DU+PC+Lb75+sxyzE8TFBhUq23MhtXNiuA1nw0c/GhsuV5IK3M7l/Hh4azDLq1DRa+j fu7HXiEzvV3T+N5Mt1is7DhkrUBl4PJsqFnRPYaX3oFVj8P6Qlk5ZLs6mljyu96mU+sv zpquQbUhbrKn4GUU7udC1O03s9ThOPwnMOZQlJkJdwRy8ikjPxI0naLAwtnZbb4WzEoy 3BNpuQVnvIHEKJKSenAtt9bKOamb6q7fxWlfz+PtLFjOTztncr7nNFnGXeF7WX7UQ2CN r/J7luQKDmaqSpU1aG4/YKRgzEf0E8SysEU40ARw1q6PEFiJTtJVgg7dYOEMGKOTGrN2 xyCQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764796; x=1684356796; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=apg836Gn0N7QExyyhUUGNgc2fd2r3iyoAghRwsUjc2o=; b=ZDjNDJ505lpUPd7IFNNLJ0uPyASL8WTXYreEYg7DUjI9uIR/lbd7ztQQaEA1CI0G9P tqWOrulsfjVHygORX0Zy+4aIp9gHTW04cVL25DuyMgx9AVtxFx1LD+51vafjdqxC3lHO QhvgWZw1qAtu770XA8Ir6dVuR5RwOgb8ZUX51YPCFBsYZAS7r/VhgBdS3tJ1ujEPlhwp N2jmJHpFtLPfkZJjNnfo9IQRF5xdLGjXuIoIiGXRawjthBzQt6Uy/N4sHTnGIoIgHN+o E2Ga7wZ51UYRVoGBnFpmQzOVKf9nH1Y+zJcTeREnkY7htNrs4Wy8rgo2yt4JzEMYSMn6 hWyA== X-Gm-Message-State: AAQBX9eUcD6rUXxj2OHPt0b3teMdIXxPMVIwh3t3Bu5iEWgF5rQvC4a3 dRX4FUn7kEz6qmCFjSyQE2s= X-Google-Smtp-Source: AKy350ZcoHXe5N321oYs87Q7Ayd5DnxwU4W2BhAi27QaGMsyV7i9NrgOQAsujiVTD54cy1B75qhHjg== X-Received: by 2002:a17:90b:3b8c:b0:23f:9d83:ad76 with SMTP id pc12-20020a17090b3b8c00b0023f9d83ad76mr12727299pjb.23.1681764795745; Mon, 17 Apr 2023 13:53:15 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:15 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 22/33] hexagon: Convert __pte_free_tlb() to use ptdescs Date: Mon, 17 Apr 2023 13:50:37 -0700 Message-Id: <20230417205048.15870-23-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. 
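Spelled out, the __pte_free_tlb() conversions in this and the neighboring patches all take the same shape. The sketch below is illustrative only: the example_* wrapper is hypothetical, pgtable_t is assumed to be the PTE table's struct page on this architecture, and the ptdesc helpers are the ones introduced earlier in this series.

static inline void example_pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte)
{
	struct ptdesc *ptdesc = page_ptdesc(pte);	/* typed view of the same page */

	ptdesc_pte_dtor(ptdesc);			/* was pgtable_pte_page_dtor(pte) */
	tlb_remove_page_ptdesc(tlb, ptdesc);		/* was tlb_remove_page(tlb, pte) */
}

The macro change below is this pattern, open-coded per architecture.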
Signed-off-by: Vishal Moola (Oracle) --- arch/hexagon/include/asm/pgalloc.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h index f0c47e6a7427..0f8432430e68 100644 --- a/arch/hexagon/include/asm/pgalloc.h +++ b/arch/hexagon/include/asm/pgalloc.h @@ -87,10 +87,10 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, max_kernel_seg = pmdindex; } -#define __pte_free_tlb(tlb, pte, addr) \ -do { \ - pgtable_pte_page_dtor((pte)); \ - tlb_remove_page((tlb), (pte)); \ +#define __pte_free_tlb(tlb, pte, addr) \ +do { \ + ptdesc_pte_dtor((page_ptdesc(pte))); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #endif From patchwork Mon Apr 17 20:50:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214643 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4088AC7EE33 for ; Mon, 17 Apr 2023 20:54:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231148AbjDQUyS (ORCPT ); Mon, 17 Apr 2023 16:54:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40216 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231215AbjDQUx3 (ORCPT ); Mon, 17 Apr 2023 16:53:29 -0400 Received: from mail-pl1-x62f.google.com (mail-pl1-x62f.google.com [IPv6:2607:f8b0:4864:20::62f]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id CF4227A91; Mon, 17 Apr 2023 13:53:17 -0700 (PDT) Received: by mail-pl1-x62f.google.com with SMTP id p17so16171574pla.3; Mon, 17 Apr 2023 13:53:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764797; x=1684356797; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Pf7K/iFMD6od7/SRYpuPsYcodkkXMgOdjXv5YJvZldk=; b=Bwk4ooUgupEoGCPug2H2Eo/HpnuuhxU7x81C/Qk6PZMSk1B54ZJ/T95wzfLs2bHmOL EoIfThp59UctKku6w1OEzm9Tx1+NGxZtvh5x5EZcL/GqxrGONyvWeZgGKJ7IB/ueJlyr 3shywEfhQgNVSuay+LMKRYZJ2fcroNukWPhJ2nbiEAP2p1v1zQae9MOASciSz7iDNPe8 PaaHFDAGtzWOJsB/jrA0V4yjZJzVFkj1D7lHE74htYvR1G0N63C9Bb92oSk5fpPTqJrg y20llT4kvkmAZ0rau94Yl1KLiffp2Awm3htmoO0PKOmikRHdd7NJITZhSry9QyqyhUwZ 8Oiw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764797; x=1684356797; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Pf7K/iFMD6od7/SRYpuPsYcodkkXMgOdjXv5YJvZldk=; b=fIM9vJXV4q2wA+fi1fhfk/WCr846AG6KQFVwVp0QJXJg8qN06tFZfql28Rbu6wkwwA iU4BFJTHNB0nh/Qhmshpmyz97NPPUUnaEyX4JoTnfB5stJ5zHd9Olu/5gENSXSPDL9G5 Vyc2V4Hth2ASsR5DFOuDgfSPzuH/FFsHDlNkgh5Dg9qH8hSqbXiId0QzPHUGNyxRbZ9H z4oGq5gqV1hmJ8FHqYXwCR78Uc5ktX9+vogIBs4KZdb6Rv9bi71EdVIu4KsJ6YvAL5Ye mAA7pUhVtl3VjR435INVoo7EwyJvEH69K3KBQ6qkPPrNFse/kuvxqCbOP4yw9IvMgHZw 4awA== X-Gm-Message-State: AAQBX9eLkSeIlBl7cB2SQlQWCbZNN2FTw2COS87M5HtS5b3q7BnA8/tT OCAO9iWhEXPBQ3sgwghU0Do= X-Google-Smtp-Source: AKy350YGLzUMfl8U7k4p6Sd9nO3FUwkbMa1TWeVYLUOOgANIZAClkJJEoJzR0+dt425gFfwmjH24tg== X-Received: by 2002:a17:90a:ab12:b0:246:8b47:3d5b with SMTP id m18-20020a17090aab1200b002468b473d5bmr16661185pjq.18.1681764797279; 
Mon, 17 Apr 2023 13:53:17 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.15 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:17 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 23/33] loongarch: Convert various functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:38 -0700 Message-Id: <20230417205048.15870-24-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use ptdesc_alloc() and ptdesc_address() instead to help standardize page tables further. Signed-off-by: Vishal Moola (Oracle) --- arch/loongarch/include/asm/pgalloc.h | 27 +++++++++++++++------------ arch/loongarch/mm/pgtable.c | 7 ++++--- 2 files changed, 19 insertions(+), 15 deletions(-) diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h index af1d1e4a6965..1fe074f85b6b 100644 --- a/arch/loongarch/include/asm/pgalloc.h +++ b/arch/loongarch/include/asm/pgalloc.h @@ -45,9 +45,9 @@ extern void pagetable_init(void); extern pgd_t *pgd_alloc(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, address) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), pte); \ +do { \ + ptdesc_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) #ifndef __PAGETABLE_PMD_FOLDED @@ -55,18 +55,18 @@ do { \ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) { pmd_t *pmd; - struct page *pg; + struct ptdesc *ptdesc; - pg = alloc_page(GFP_KERNEL_ACCOUNT); - if (!pg) + ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, 0); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(pg)) { - __free_page(pg); + if (!ptdesc_pmd_ctor(ptdesc)) { + ptdesc_free(ptdesc); return NULL; } - pmd = (pmd_t *)page_address(pg); + pmd = (pmd_t *)ptdesc_address(ptdesc); pmd_init(pmd); return pmd; } @@ -80,10 +80,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address) { pud_t *pud; + struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0); - pud = (pud_t *) __get_free_page(GFP_KERNEL); - if (pud) - pud_init(pud); + if (!ptdesc) + return NULL; + pud = (pud_t *)ptdesc_address(ptdesc); + + pud_init(pud); return pud; } diff --git a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c index 36a6dc0148ae..ff07b8f1ef30 100644 --- a/arch/loongarch/mm/pgtable.c +++ b/arch/loongarch/mm/pgtable.c @@ -11,10 +11,11 
@@ pgd_t *pgd_alloc(struct mm_struct *mm) { - pgd_t *ret, *init; + pgd_t *init, *ret = NULL; + struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0); - ret = (pgd_t *) __get_free_page(GFP_KERNEL); - if (ret) { + if (ptdesc) { + ret = (pgd_t *)ptdesc_address(ptdesc); init = pgd_offset(&init_mm, 0UL); pgd_init(ret); memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD, From patchwork Mon Apr 17 20:50:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214644 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0E2F2C7EE22 for ; Mon, 17 Apr 2023 20:55:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230463AbjDQUy7 (ORCPT ); Mon, 17 Apr 2023 16:54:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39774 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231243AbjDQUxb (ORCPT ); Mon, 17 Apr 2023 16:53:31 -0400 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com [IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6295D7ED5; Mon, 17 Apr 2023 13:53:19 -0700 (PDT) Received: by mail-pl1-x62e.google.com with SMTP id q2so32217997pll.7; Mon, 17 Apr 2023 13:53:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764798; x=1684356798; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=nlmIG8UgshEYTOBASKXa1dMTltHF5q5LujwV+qMeaTE=; b=MO1PC1xt/KvuZ3BR0uFzYFvll2jFuloCAIP6GEEELaZGVsUuayg4+lNKAol0RlX2+3 Iz38d0dC9QwTB6d/mc2DYTZXN+iAw+HkmmeqA37ilN3ijvqPuJYcJPAfuZ5NSiq7Oy0y 5tk6CT5Flw0tO/XZTIWPNors/JAYwfVMvyqbu0GP1qyv9cYfy3iQHS4IKEIm4tGX0nuK r/0iigtw61JwQ8Pxh0uDlXeUn+URgKqdZ4DDg0uzUbspXwSfQOQ/MuFlx/Zb5Bn9cIh/ DsllyX0j/egvd/MrVX7PxBJ0K/+L/v/BhuCd1hvi1MmWpxA5K0b5kps6s+vigGrLYnHf Yhaw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764798; x=1684356798; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=nlmIG8UgshEYTOBASKXa1dMTltHF5q5LujwV+qMeaTE=; b=RRvXKcW5Ng4evII11ap44bGcHH5er3+Ljx5d/opDmWZN5KU2nThMig057LbTSUUZaE +fvSf1ApjLSKbJpMA5BXxC1Dlcj8oIcHwkTOwjo8GHoYX4/e9wSWXsyIqwOLwyHoe254 rno6O5f0ahmJWb5CWIFUrDxqKUbQBLyxj2r0VaRw0zcnTYsqk1YpFrQdmnbxKpwr1flX ZhXb7UERbLLezcWV5zcc9npRcE1/nJa1l7ZjJ50ia9OPcwyK9PqsTysG9ZyIvpsLLSzT RnHnrU1kWvhpYMQG91xg+h6JUONSZgj01sScVvQhU1CpoTexbHnDJe36bkW7daBh2uYI UH+w== X-Gm-Message-State: AAQBX9d8l7GU3t/bvoR7kDc4neBxwHMS8Yt6iD8Wg2N4mEfq3kqL1wgB gC9IfE81n9OQQbhXVqnsT4g= X-Google-Smtp-Source: AKy350aJaFntHhHbUG9MI8ooj+3D57RkURLGHHynthItQ5IDNnotpeWzZJLOOPG0rt2wtYTic7ZhtQ== X-Received: by 2002:a17:90a:7086:b0:247:83ed:7e5d with SMTP id g6-20020a17090a708600b0024783ed7e5dmr6464463pjk.18.1681764798565; Mon, 17 Apr 2023 13:53:18 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.17 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:18 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew 
Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 24/33] m68k: Convert various functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:39 -0700 Message-Id: <20230417205048.15870-25-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use ptdesc_alloc() and ptdesc_address() instead to help standardize page tables further. Signed-off-by: Vishal Moola (Oracle) --- arch/m68k/include/asm/mcf_pgalloc.h | 41 ++++++++++++++-------------- arch/m68k/include/asm/sun3_pgalloc.h | 8 +++--- arch/m68k/mm/motorola.c | 4 +-- 3 files changed, 27 insertions(+), 26 deletions(-) diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h index 5c2c0a864524..b0e909e23e14 100644 --- a/arch/m68k/include/asm/mcf_pgalloc.h +++ b/arch/m68k/include/asm/mcf_pgalloc.h @@ -7,20 +7,19 @@ extern inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) { - free_page((unsigned long) pte); + ptdesc_free(virt_to_ptdesc(pte)); } extern const char bad_pmd_string[]; extern inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) { - unsigned long page = __get_free_page(GFP_DMA); + struct ptdesc *ptdesc = ptdesc_alloc(GFP_DMA | __GFP_ZERO, 0); - if (!page) + if (!ptdesc) return NULL; - memset((void *)page, 0, PAGE_SIZE); - return (pte_t *) (page); + return (pte_t *) (ptdesc_address(ptdesc)); } extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address) @@ -35,36 +34,36 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable, unsigned long address) { - struct page *page = virt_to_page(pgtable); + struct ptdesc *ptdesc = virt_to_ptdesc(pgtable); - pgtable_pte_page_dtor(page); - __free_page(page); + ptdesc_pte_dtor(ptdesc); + ptdesc_free(ptdesc); } static inline pgtable_t pte_alloc_one(struct mm_struct *mm) { - struct page *page = alloc_pages(GFP_DMA, 0); + struct ptdesc *ptdesc = ptdesc_alloc(GFP_DMA, 0); pte_t *pte; - if (!page) + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if (!ptdesc_pte_ctor(ptdesc)) { + ptdesc_free(ptdesc); return NULL; } - pte = page_address(page); - clear_page(pte); + pte = ptdesc_address(ptdesc); + ptdesc_clear(pte); return pte; } static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable) { - struct page *page = virt_to_page(pgtable); + struct ptdesc *ptdesc = virt_to_ptdesc(pgtable); - pgtable_pte_page_dtor(page); - __free_page(page); + ptdesc_pte_dtor(ptdesc); + ptdesc_free(ptdesc); } /* @@ -75,16 +74,18 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable) static inline void
pgd_free(struct mm_struct *mm, pgd_t *pgd) { - free_page((unsigned long) pgd); + ptdesc_free(virt_to_ptdesc(pgd)); } static inline pgd_t *pgd_alloc(struct mm_struct *mm) { pgd_t *new_pgd; + struct ptdesc *ptdesc = ptdesc_alloc(GFP_DMA | __GFP_NOWARN, 0); - new_pgd = (pgd_t *)__get_free_page(GFP_DMA | __GFP_NOWARN); - if (!new_pgd) + if (!ptdesc) return NULL; + new_pgd = (pgd_t *) ptdesc_address(ptdesc); + memcpy(new_pgd, swapper_pg_dir, PTRS_PER_PGD * sizeof(pgd_t)); memset(new_pgd, 0, PAGE_OFFSET >> PGDIR_SHIFT); return new_pgd; diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h index 198036aff519..013d375fc239 100644 --- a/arch/m68k/include/asm/sun3_pgalloc.h +++ b/arch/m68k/include/asm/sun3_pgalloc.h @@ -17,10 +17,10 @@ extern const char bad_pmd_string[]; -#define __pte_free_tlb(tlb,pte,addr) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), pte); \ +#define __pte_free_tlb(tlb, pte, addr) \ +do { \ + ptdesc_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c index 911301224078..1e47b977bcf1 100644 --- a/arch/m68k/mm/motorola.c +++ b/arch/m68k/mm/motorola.c @@ -161,7 +161,7 @@ void *get_pointer_table(int type) * m68k doesn't have SPLIT_PTE_PTLOCKS for not having * SMP. */ - pgtable_pte_page_ctor(virt_to_page(page)); + ptdesc_pte_ctor(virt_to_ptdesc(page)); } mmu_page_ctor(page); @@ -201,7 +201,7 @@ int free_pointer_table(void *table, int type) list_del(dp); mmu_page_dtor((void *)page); if (type == TABLE_PTE) - pgtable_pte_page_dtor(virt_to_page(page)); + ptdesc_pte_dtor(virt_to_ptdesc(page)); free_page (page); return 1; } else if (ptable_list[type].next != dp) { From patchwork Mon Apr 17 20:50:40 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214645 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 973CDC77B7F for ; Mon, 17 Apr 2023 20:55:30 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231169AbjDQUz2 (ORCPT ); Mon, 17 Apr 2023 16:55:28 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40036 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231259AbjDQUxc (ORCPT ); Mon, 17 Apr 2023 16:53:32 -0400 Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com [IPv6:2607:f8b0:4864:20::102b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 946144C10; Mon, 17 Apr 2023 13:53:20 -0700 (PDT) Received: by mail-pj1-x102b.google.com with SMTP id hg14-20020a17090b300e00b002471efa7a8fso14357551pjb.0; Mon, 17 Apr 2023 13:53:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764800; x=1684356800; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=kyicI3UFSJTH3R33HGjLFGEyUDrTPXlIkqJCgIfzdrc=; b=DwA2KoT+Gkb+TSG9gl3xh2UjcEOrukpYDh0FxXwVNpDTsL49sodGBFe/MxZJLiffh8 Okfe4RTRfpJn4SFFKcweRaXMogMu3aLJXW8S0ijOu/y8rquyNOCuhOYqIlMfHtHNwFXl 0RIVF9f99J773/WS0WxWYyLGFniLvieyUAz5torJ7RwCjhPN4iKRxNqBQLtPHzo1HkHq
UDiYLsTcdIpx6G4pcrHTW0RLQiAILxALX6WF9BC1qxq+9QtqiExh0N2abkWgWZYyYNmB dTtZwe83/hP4XI3Q8dsRpB6pTlzqUQszpTQqyb8BBRRnRiGvSruO77w1spjFd7OVj31k rIHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764800; x=1684356800; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=kyicI3UFSJTH3R33HGjLFGEyUDrTPXlIkqJCgIfzdrc=; b=f6XEFEwvGArYZP3GGplMlAHhpKYi4cHAhy6Fgax41YX1bdWzMwjV6xvQrYdlNI/JAR N30qSXN+hxUzbgYWW027U7ZnpxEVx+AoCDRYVaRD5BgwkXwoKZ/FK0EmFJiVI0y34s8/ lMUFy3fXlf/bdxTT/OuESh3bkMNB/0t6K/J9vk6esHieMbBcSiUqdd6QmNl3bnFih9Qs hY/XwKrmriiosuMkdraG6Gi6XXbWyb53MRqD/XWPmRHpE0Dz/5uP/0Y+MwttTfwOay/n 4M/lh0rNdpHWlaF3z6XEbTKDaqEIThV8OVle/v93iIlswpMWRWU+acBMgUmJjUSfhHDj 8DrQ== X-Gm-Message-State: AAQBX9eZGKmFrQpH0EopmCi4PtQhSCnSf1a9ypiJL3W4qCb9LYZTCJiX E15veHMK9As65hD/yfoMbGo= X-Google-Smtp-Source: AKy350b8thcmecOoaOKiNwMZgeAazmiEiQ9V6nLrivAsLyOgVvPy05KEMe/JXWQl3cGgcJMZ1iU0KA== X-Received: by 2002:a17:90b:3848:b0:247:4ad1:f69b with SMTP id nl8-20020a17090b384800b002474ad1f69bmr11748993pjb.26.1681764800112; Mon, 17 Apr 2023 13:53:20 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:19 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 25/33] mips: Convert various functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:40 -0700 Message-Id: <20230417205048.15870-26-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use ptdesc_alloc() and ptdesc_address() instead to help standardize page tables further. 
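On the allocation side, the conversion applied here follows the shape sketched below. This is illustrative only: the example_* name is hypothetical, while PMD_TABLE_ORDER, the GFP flags, and the ptdesc helpers are the ones already used by the mips code and by earlier patches in this series.

static inline pmd_t *example_pmd_alloc(void)
{
	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER);

	if (!ptdesc)				/* was: if (!pg) after alloc_pages() */
		return NULL;
	if (!ptdesc_pmd_ctor(ptdesc)) {		/* was pgtable_pmd_page_ctor(pg) */
		ptdesc_free(ptdesc);		/* was __free_pages(pg, PMD_TABLE_ORDER) */
		return NULL;
	}
	return (pmd_t *)ptdesc_address(ptdesc);	/* was page_address(pg) */
}

Since ptdesc_alloc() takes the allocation order directly, the multi-page pgd/pmd/pud tables keep their existing orders.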
Signed-off-by: Vishal Moola (Oracle) --- arch/mips/include/asm/pgalloc.h | 31 +++++++++++++++++-------------- arch/mips/mm/pgtable.c | 7 ++++--- 2 files changed, 21 insertions(+), 17 deletions(-) diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h index f72e737dda21..7f7cc3140b27 100644 --- a/arch/mips/include/asm/pgalloc.h +++ b/arch/mips/include/asm/pgalloc.h @@ -51,13 +51,13 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm); static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) { - free_pages((unsigned long)pgd, PGD_TABLE_ORDER); + ptdesc_free(virt_to_ptdesc(pgd)); } -#define __pte_free_tlb(tlb,pte,address) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), pte); \ +#define __pte_free_tlb(tlb, pte, address) \ +do { \ + ptdesc_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) #ifndef __PAGETABLE_PMD_FOLDED @@ -65,18 +65,18 @@ do { \ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) { pmd_t *pmd; - struct page *pg; + struct ptdesc *ptdesc; - pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER); - if (!pg) + ptdesc = ptdesc_alloc(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(pg)) { - __free_pages(pg, PMD_TABLE_ORDER); + if (!ptdesc_pmd_ctor(ptdesc)) { + ptdesc_free(ptdesc); return NULL; } - pmd = (pmd_t *)page_address(pg); + pmd = (pmd_t *)ptdesc_address(ptdesc); pmd_init(pmd); return pmd; } @@ -90,10 +90,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address) { pud_t *pud; + struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, PUD_TABLE_ORDER); - pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_TABLE_ORDER); - if (pud) - pud_init(pud); + if (!ptdesc) + return NULL; + pud = (pud_t *)ptdesc_address(ptdesc); + + pud_init(pud); return pud; } diff --git a/arch/mips/mm/pgtable.c b/arch/mips/mm/pgtable.c index b13314be5d0e..d626db9ac224 100644 --- a/arch/mips/mm/pgtable.c +++ b/arch/mips/mm/pgtable.c @@ -10,10 +10,11 @@ pgd_t *pgd_alloc(struct mm_struct *mm) { - pgd_t *ret, *init; + pgd_t *init, *ret = NULL; + struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, PGD_TABLE_ORDER); - ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER); - if (ret) { + if (ptdesc) { + ret = (pgd_t *) ptdesc_address(ptdesc); init = pgd_offset(&init_mm, 0UL); pgd_init(ret); memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD, From patchwork Mon Apr 17 20:50:41 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214646 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 041A8C7EE26 for ; Mon, 17 Apr 2023 20:55:34 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231231AbjDQUzc (ORCPT ); Mon, 17 Apr 2023 16:55:32 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40112 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231293AbjDQUxg (ORCPT ); Mon, 17 Apr 2023 16:53:36 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2E5F65267; Mon, 17 Apr 2023 13:53:22 -0700 (PDT) 
Received: by mail-pl1-x636.google.com with SMTP id w11so27337322plp.13; Mon, 17 Apr 2023 13:53:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764801; x=1684356801; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=tniPUuVL526Jec1TVIaHRLAUx7OruO8jRIKlA5wqt8Q=; b=p/jfRaNPQGaoJd3ujqaeLDuUP7t0WkHa2oZwiGdpehhESyVFAOdOmg0fusWkZ2XuJJ Yw6K8nYA6/5N6tLufuYh20gi7M7wxwYV1IodMymMAu3e2vjlzp/s4l9Vf7kZibjxovMR cghNEQkYGyGSVWGuIAzI28gyVhrT8GgnWs6YLc4dnuPB71AZU/a5qfpWgaMEJ0Qfl1S+ OG2NEJp7fE32TkPWsKahDfFv2MiE7qQWnDtWKvNsVMNbEDAHg0y2Zsbd1R1MkIpv2dly 3HDaLyPuLXfOS4PxJXenmM3XNPnXJ5LYctnHIHC9Ub4peO/zkx0GIKpeh+s1nuyRNP2C xoAQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764801; x=1684356801; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=tniPUuVL526Jec1TVIaHRLAUx7OruO8jRIKlA5wqt8Q=; b=dfrJvADnJaSu1YlSEMqBJOVQM4TeR5Xr55oQrourXegWYlFTWhyJE8YZCPQl4+GVKY Zk3zcoyFXARqrPPdqwDZ9yxdvGuM8az/UPlKcH+hagJqzBwcZdAFQY+R0AUw9Zp0VzGs cDURFVZlRORUPEbhLIeXWaN4wUn/qW8QlKO49hZefnmAsbwrbx63u3gpR8O01Hu4w6Uv RlpPQh3LDxxTdberElwPmIXlPadBMid0RgGf5Xpgd67L/K0K7oIxjmnvY4bdF5En6Wkx cV4IvsnSQgoXexpLbuQoLhRtgzjqtNoyNscRdmAjmnToCAgWxs66Xgfo9JlgRmH9C2Z9 ybDw== X-Gm-Message-State: AAQBX9dmtKtMVgzwVtApYX7UWybmYisvDjWQpwfAeJKpl5df8xKGrGOX RMjrDzSHG53mrQM9iV/dMM8= X-Google-Smtp-Source: AKy350YG/84D/pba4O2jep64IWrC+mjA+ueyHqImP8NgwjI0qVBUMRNq8Z2uYJzIP+ywt0PW/sWA2w== X-Received: by 2002:a17:90a:8048:b0:247:78c0:125e with SMTP id e8-20020a17090a804800b0024778c0125emr8083390pjw.15.1681764801496; Mon, 17 Apr 2023 13:53:21 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:21 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 26/33] nios2: Convert __pte_free_tlb() to use ptdescs Date: Mon, 17 Apr 2023 13:50:41 -0700 Message-Id: <20230417205048.15870-27-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. 
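Alongside the page_ptdesc() form used in the macro below, sibling patches in this series use the kernel-virtual-address variant for tables that are only ever referenced by their mapped address. A minimal sketch of that shape (the example_* name is hypothetical; the helpers are assumed to mirror the page-based calls they replace, as every hunk in this series does):

static inline void example_free_pte_table(pte_t *table)
{
	struct ptdesc *ptdesc = virt_to_ptdesc(table);	/* cf. virt_to_page() */

	ptdesc_pte_dtor(ptdesc);			/* cf. pgtable_pte_page_dtor() */
	ptdesc_free(ptdesc);				/* cf. __free_page() */
}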
Signed-off-by: Vishal Moola (Oracle) --- arch/nios2/include/asm/pgalloc.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h index ecd1657bb2ce..ed868f4c0ca9 100644 --- a/arch/nios2/include/asm/pgalloc.h +++ b/arch/nios2/include/asm/pgalloc.h @@ -28,10 +28,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, extern pgd_t *pgd_alloc(struct mm_struct *mm); -#define __pte_free_tlb(tlb, pte, addr) \ - do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), (pte)); \ +#define __pte_free_tlb(tlb, pte, addr) \ + do { \ + ptdesc_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #endif /* _ASM_NIOS2_PGALLOC_H */ From patchwork Mon Apr 17 20:50:42 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214724 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2B263C7EE21 for ; Mon, 17 Apr 2023 20:55:38 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231248AbjDQUzg (ORCPT ); Mon, 17 Apr 2023 16:55:36 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40468 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230371AbjDQUxj (ORCPT ); Mon, 17 Apr 2023 16:53:39 -0400 Received: from mail-pj1-x1031.google.com (mail-pj1-x1031.google.com [IPv6:2607:f8b0:4864:20::1031]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 87D6710FE; Mon, 17 Apr 2023 13:53:23 -0700 (PDT) Received: by mail-pj1-x1031.google.com with SMTP id l9-20020a17090a3f0900b0023d32684e7fso256115pjc.1; Mon, 17 Apr 2023 13:53:23 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764803; x=1684356803; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=M2PCFvKq9JWcIC4Exd9oDYO0v0PQOHyXRap+yEgbOb4=; b=Sg9gXChNrNLShRRgfwOKl/OTTCvmKSrllpuIiui1MavGQQvyi4/hV9+QaVJ3t9hABY 7LNyY9zC8MZPe/8uyGF6BwZFFCebqKsRkCF7WD2/MczVElNAeTdXAZoU8m9zjTh6P7Zy 0Ui6eydpR2I1tZyrhRUO19d2JsU/fHL/oyFe81PNxO86Jxw/ya4bgHlBy023YgAYk0Ea ro6AAOx594UY8fA/m7D7bEpjpZMo/caINuRe5NJsUpyuSeut+/oOBeuT5hXbXmkAYI67 Pav/++zqcV0Pbp5FUcdmVoTNUO7tgK9EwNCc3eK51W0dnm+P+Og2+jgMY/+82hRUmWz8 imHQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764803; x=1684356803; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=M2PCFvKq9JWcIC4Exd9oDYO0v0PQOHyXRap+yEgbOb4=; b=hnulMdvSj16phQfa6Bq/kGGvzyy4S7/6B97FE4KPLDZpaErpF57TJlyiU/xpsHt5lt SUQdV+NLCN825EOUU11aX5XHNt0pnG/Ktpg7fs+xH2ilLg88B1RpY5IL/DzkXCADfmJD xp2BQF1sm+ol1WbXyl8Y5GUIVk0aUWgw2TgOaO7/ViclYwRUh9KuygppoDn6/FO2Emdi 0ADx5Pgt2rkoLzvevUxkai3l2gHSLUNyD2nJmpeX/eXRGLzMhoOAX90D1bNmLaPTWtk6 ypb2msdZiWt/LNA8YNxc+MvKJK+CbH8YXqjL5lYzPPxySKHFzqNR1V1KLDVoL9J7a7D4 ZcQw== X-Gm-Message-State: AAQBX9ecfF72mMJHuPHoF+XQKFZWJid5e41ER4OvGuI1RChGqR9sAH0G XN0FtI9OU0FhSwu899qf4iU= X-Google-Smtp-Source: AKy350ZH2s2lfJdxSYj+T2S9Ef0UoYzkgLYY3eeGq1S7Q2hAbd0lFGX52BFTYQkOd2khPVGSck6LXA== X-Received: by 2002:a17:90a:e28f:b0:247:4200:7432 with SMTP id 
d15-20020a17090ae28f00b0024742007432mr11696070pjz.40.1681764802858; Mon, 17 Apr 2023 13:53:22 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.21 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:22 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 27/33] openrisc: Convert __pte_free_tlb() to use ptdescs Date: Mon, 17 Apr 2023 13:50:42 -0700 Message-Id: <20230417205048.15870-28-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. Signed-off-by: Vishal Moola (Oracle) --- arch/openrisc/include/asm/pgalloc.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h index b7b2b8d16fad..14e641686281 100644 --- a/arch/openrisc/include/asm/pgalloc.h +++ b/arch/openrisc/include/asm/pgalloc.h @@ -66,10 +66,10 @@ extern inline pgd_t *pgd_alloc(struct mm_struct *mm) extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm); -#define __pte_free_tlb(tlb, pte, addr) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), (pte)); \ +#define __pte_free_tlb(tlb, pte, addr) \ +do { \ + ptdesc_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #endif From patchwork Mon Apr 17 20:50:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214725 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A10CDC7EE22 for ; Mon, 17 Apr 2023 20:55:44 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231239AbjDQUzn (ORCPT ); Mon, 17 Apr 2023 16:55:43 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40522 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231158AbjDQUxk (ORCPT ); Mon, 17 Apr 2023 16:53:40 -0400 Received: from mail-pj1-x1032.google.com (mail-pj1-x1032.google.com [IPv6:2607:f8b0:4864:20::1032]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E55FA7EC4; Mon, 17 Apr 2023 13:53:24 -0700 (PDT) Received: by mail-pj1-x1032.google.com with SMTP id hg14-20020a17090b300e00b002471efa7a8fso14357888pjb.0; Mon, 17 Apr 2023 13:53:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764804; x=1684356804; 
h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=uzUB7crKqn7Of69Iz0IiuKaVcaC8bpzNVtqjj02cJJs=; b=CQxyKkc21JMWEGZgaxo+DzkT7hmUe/EPeRLNy9307TK3NRBVO8qYFVHUBh0a+wrrFK REDRofZ4pQ77HB/xKTTnWP/BT1cyKC/pHZN9NhEm5iKL1tflp83IUSNn/z3JIjWng98Y 6RfpzxV5XfPWxGKOX4DdYV/wLmmlXNt332CwEYLgb9fOwCusASfTkzDGq48OmDuUGGpq tIsA4gMM5D4jkK0NC3tYS5RarJ0ZWQtkI1tM39+DRuLci+gK44bt1F+fDRaPwS3T5T9g S4uLRylS37AnplaeXGFnk7T6Qzad8RMawrqwBO6UKOhHtptnemCZZZWNN4xJh7uqqhEQ vrGQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764804; x=1684356804; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=uzUB7crKqn7Of69Iz0IiuKaVcaC8bpzNVtqjj02cJJs=; b=M1V/fCH5SiOPWKYPpudc+o/nfzFL5CK1gTsrxvUlK4rZbc+Ara9OfF9WGh1fZuSifS nayUp46R7uJMMoQNlKUWj5PS8jl5P2aPkESi2rBtzbXzYJJ5t2qpoC0VI05pYoglYeTz kOvgOFBPgjf0OpUNkKf998wF2p7RGSDaGK9qKSWxCaRE+SipBYHraMoGYLkY4pesOG63 3BY0t81LKQwnkRFaZMAoQVK4nwvlyIGG+ILsSAxEoDsIqquTMS2LRqo11TBTlJOtt52k f5o21zJvSsFnd/S3Ykjs6KaAHM0X3EInGebSt4uEuJIyVDVevxz3vqcnm7h2stXkfITw ItyA== X-Gm-Message-State: AAQBX9fRYPJWttBz3lAQCqH+SJG+zbIIYoe7gNEbbyZ+o5xTjglx+/Gp LSQWV3ehyhiDebZr48YQ7Ps= X-Google-Smtp-Source: AKy350avL9PdXaEjhmJuV1ybk5n54BLKq2822oDesoZKEo2GpNTKDMiGoy71eNPEPZjYfA/nM1qr3w== X-Received: by 2002:a17:90a:1286:b0:246:bb61:4a5b with SMTP id g6-20020a17090a128600b00246bb614a5bmr16053424pja.8.1681764804243; Mon, 17 Apr 2023 13:53:24 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.23 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:23 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 28/33] riscv: Convert alloc_{pmd, pte}_late() to use ptdescs Date: Mon, 17 Apr 2023 13:50:43 -0700 Message-Id: <20230417205048.15870-29-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use ptdesc_alloc() and ptdesc_address() instead to help standardize page tables further. 
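The early-boot allocators keep their existing BUG_ON() error policy; in outline, the converted helpers look like the sketch below (illustrative only, with a hypothetical example_* name; the real hunks follow).

static phys_addr_t __init example_alloc_pte_late(void)
{
	struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0);

	/* allocation or constructor failure is still fatal, as before */
	BUG_ON(!ptdesc || !ptdesc_pte_ctor(ptdesc));
	return __pa(ptdesc_address(ptdesc));
}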
Signed-off-by: Vishal Moola (Oracle) --- arch/riscv/include/asm/pgalloc.h | 8 ++++---- arch/riscv/mm/init.c | 16 ++++++---------- 2 files changed, 10 insertions(+), 14 deletions(-) diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index 59dc12b5b7e8..cb5536403bd8 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -153,10 +153,10 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) #endif /* __PAGETABLE_PMD_FOLDED */ -#define __pte_free_tlb(tlb, pte, buf) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), pte); \ +#define __pte_free_tlb(tlb, pte, buf) \ +do { \ + ptdesc_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));\ } while (0) #endif /* CONFIG_MMU */ diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 0f14f4a8d179..2737cbc4ad12 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -346,12 +346,10 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va) static phys_addr_t __init alloc_pte_late(uintptr_t va) { - unsigned long vaddr; - - vaddr = __get_free_page(GFP_KERNEL); - BUG_ON(!vaddr || !pgtable_pte_page_ctor(virt_to_page(vaddr))); + struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0); - return __pa(vaddr); + BUG_ON(!ptdesc || !ptdesc_pte_ctor(ptdesc)); + return __pa((pte_t *)ptdesc_address(ptdesc)); } static void __init create_pte_mapping(pte_t *ptep, @@ -429,12 +427,10 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va) static phys_addr_t __init alloc_pmd_late(uintptr_t va) { - unsigned long vaddr; - - vaddr = __get_free_page(GFP_KERNEL); - BUG_ON(!vaddr || !pgtable_pmd_page_ctor(virt_to_page(vaddr))); + struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL, 0); - return __pa(vaddr); + BUG_ON(!ptdesc || !ptdesc_pmd_ctor(ptdesc)); + return __pa((pmd_t *)ptdesc_address(ptdesc)); } static void __init create_pmd_mapping(pmd_t *pmdp, From patchwork Mon Apr 17 20:50:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214726 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id DEFB9C77B78 for ; Mon, 17 Apr 2023 20:55:50 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231304AbjDQUzt (ORCPT ); Mon, 17 Apr 2023 16:55:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:39920 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231319AbjDQUxs (ORCPT ); Mon, 17 Apr 2023 16:53:48 -0400 Received: from mail-pl1-x636.google.com (mail-pl1-x636.google.com [IPv6:2607:f8b0:4864:20::636]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4096140FD; Mon, 17 Apr 2023 13:53:26 -0700 (PDT) Received: by mail-pl1-x636.google.com with SMTP id w1so4515600plg.6; Mon, 17 Apr 2023 13:53:26 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764806; x=1684356806; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=1imWRfaXT28Tln454VWhUpR4BzNKR6UHESFdZqWEUY0=; b=bMzg27GqJpInZYbq5g7DlCbGyMQLVTO8/WaIXUjb9+Y1sP2Yg8U4pzTwcvHRDHiFNW xnksAVwNdJxrKURy2fgxnULAI227N9ToKvbiocqPRvOKs7RzP8ExmOpSZqrcqZUrdfu0 
Ly3bYv1kY3GNKQVg3ZctvJCnyIezYAOXblSfIzPoK3U4x0lQOFf2Xm+WIvk4tnnJrN+1 YHPfjXznGMNehBha6m//o8BZSI/Chio1eErrXl3QeUgRmQPwnQk3cvQT2roiB3zkJZil tzHED45+1PigKvsFpmle9q7jl79yHSt/NZKlZjrviShfr3b19IL3cewZ6zZfWBjcahF9 kN0Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764806; x=1684356806; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=1imWRfaXT28Tln454VWhUpR4BzNKR6UHESFdZqWEUY0=; b=LCAj2fHp/ZNZlbd0XBPcDcoH8pqghclANhern1hzggJO6zplL3xtT/fxbK0xmdin/1 AUzHK1zy4kkOjW4NMTKqNxtLR0FzVG5PiqF+YYn5d2VNCawbmjn2tNU47BwM2Sm1p8yq H43tEFtaDEMHD0YQEK0w08uhtm0ElPa6Tq9qPny+pDOqcZ6EVX6bSwUCoTM6xbaI8BkR 8PJkNC9HeYbbdSthiC8tflTm6lAliuF/tVqR0AoET+4p34VZFW35rCAhg3Z32mPY+1KA 6JnPVqh9RHT4FFRPvsqigyaFFqvg3to0H/e3DoOpjnQdyrIglDgC3vXE8a2E1MrFaIgI DzSw== X-Gm-Message-State: AAQBX9feLgC1kXpJ1J641N0+AVaYgufLNphC8nt+caCf/32FyJcwPFiE o+ctAgVlC+OzA91iYqR5EBE= X-Google-Smtp-Source: AKy350Z5+qBQNd0Z/JNfcMKaitdnDjfxbnOB0qE080qoviMZywL0T3VtP+5OcGBInDjHaerFJSFHYg== X-Received: by 2002:a17:90a:d142:b0:246:5fbb:43bf with SMTP id t2-20020a17090ad14200b002465fbb43bfmr15902207pjw.4.1681764805699; Mon, 17 Apr 2023 13:53:25 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:25 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 29/33] sh: Convert pte_free_tlb() to use ptdescs Date: Mon, 17 Apr 2023 13:50:44 -0700 Message-Id: <20230417205048.15870-30-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. Also cleans up some spacing issues. 
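Callers in generic mm code are unaffected by this kind of change, since only the architecture's macro body moves to ptdescs. For instance, a caller shaped like the generic free_pte_range() keeps working unchanged (a simplified sketch modeled on mm/memory.c, not code from this patch):

static void example_free_pte_range(struct mmu_gather *tlb, pmd_t *pmd,
				   unsigned long addr)
{
	pgtable_t token = pmd_pgtable(*pmd);

	pmd_clear(pmd);
	pte_free_tlb(tlb, token, addr);	/* expands to the ptdesc form on sh */
	mm_dec_nr_ptes(tlb->mm);
}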
Signed-off-by: Vishal Moola (Oracle) --- arch/sh/include/asm/pgalloc.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h index a9e98233c4d4..30e1823d2347 100644 --- a/arch/sh/include/asm/pgalloc.h +++ b/arch/sh/include/asm/pgalloc.h @@ -31,10 +31,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, set_pmd(pmd, __pmd((unsigned long)page_address(pte))); } -#define __pte_free_tlb(tlb,pte,addr) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), (pte)); \ +#define __pte_free_tlb(tlb, pte, addr) \ +do { \ + ptdesc_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #endif /* __ASM_SH_PGALLOC_H */ From patchwork Mon Apr 17 20:50:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214727 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4A957C77B7C for ; Mon, 17 Apr 2023 20:55:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231332AbjDQUzy (ORCPT ); Mon, 17 Apr 2023 16:55:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40178 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231197AbjDQUxy (ORCPT ); Mon, 17 Apr 2023 16:53:54 -0400 Received: from mail-pl1-x631.google.com (mail-pl1-x631.google.com [IPv6:2607:f8b0:4864:20::631]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7774286AE; Mon, 17 Apr 2023 13:53:27 -0700 (PDT) Received: by mail-pl1-x631.google.com with SMTP id w11so27337726plp.13; Mon, 17 Apr 2023 13:53:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764807; x=1684356807; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=uIsUNYOwSPNxjYFxvF4LgB4Ah8JH2T7Nmtlw8Cz+gsw=; b=fsz9HBXpyixYv0zGEIQNfLPb4EjMqtI/28pihon+vRduZpglxs4kGo7QlnsgzbeVep MUefXrtmBIPAEKph1/xE0Hixz+XFSj9eLb4OXqcCPGwVWZta2b+6OV8DcUNUc3MYWho8 foQqBfLzT6QF4GBhL/zISw/ZQfIxqSnZu2sd2q6CE2U54ygVclXX08i47edCNHvoHxB+ sBBNY+ovhIlsXyYoYKNVSm2boGQ3Di1szM5293gnTSGOnWYTxtAAUVimXGr6dZV0Zyrz TtkMrcCltGZOVK2SbO9vEj4TQWwRgKIU6/H0ozoMQVL6Kfmq0ZrnJgq0p7Ga0/De5Qi0 sDcA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764807; x=1684356807; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=uIsUNYOwSPNxjYFxvF4LgB4Ah8JH2T7Nmtlw8Cz+gsw=; b=WkEkdx8F4ryg6buFkGgLvnCO6coFzodqFxzQuQJRKc7Ml6PQb1t4b05aHh/zSj+oc+ uLsZDPE9qC78OI8I/LLvU5yhc2GFs8eKz5gcTR1G7oFk0gWLbxLOxixoC5t2+mA2J5Gu Qkoib55p5K4qC6j/vfTbnoS0OHMB1Kg2UmSKVa3feAO+uUB6UGCZSpLHWMBdPKi3lkHK Z5dCLrYIlD1As2+AgYhkNwtdPnYURD6Pm8ht37HfxWU0s0mFs8Hv0TCrYQ0gy3TC7/4m MYQRZ2y6Y+6JgTYL6BWFnL/7SnANYxfJphF+MvzQHExvedvSVpObX00kvpRag7p2imn/ qPkg== X-Gm-Message-State: AAQBX9cO8PJN4/ch0zwkrluVO2QO8prxFT6onf1zriMMmDQ8l8C2ZP92 K3BiF/2BHeNhPw7jxjnkoGM= X-Google-Smtp-Source: AKy350YfnaT8KE6LOLQ9wRoeSIO99cnRXYKnv4nnzRXnI3u1QAONlMDFC8rrZFtHP4TZD3hWsCKEag== X-Received: by 2002:a17:90a:fa3:b0:237:3dfb:9095 with SMTP id 
32-20020a17090a0fa300b002373dfb9095mr16284735pjz.6.1681764806946; Mon, 17 Apr 2023 13:53:26 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:26 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 30/33] sparc64: Convert various functions to use ptdescs Date: Mon, 17 Apr 2023 13:50:45 -0700 Message-Id: <20230417205048.15870-31-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Signed-off-by: Vishal Moola (Oracle) --- arch/sparc/mm/init_64.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 04f9db0c3111..eedb3e03b1fe 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -2893,14 +2893,15 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm) pgtable_t pte_alloc_one(struct mm_struct *mm) { - struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO); - if (!page) + struct ptdesc *ptdesc = ptdesc_alloc(GFP_KERNEL | __GFP_ZERO, 0); + + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if (!ptdesc_pte_ctor(ptdesc)) { + ptdesc_free(ptdesc); return NULL; } - return (pte_t *) page_address(page); + return (pte_t *) ptdesc_address(ptdesc); } void pte_free_kernel(struct mm_struct *mm, pte_t *pte) @@ -2910,10 +2911,10 @@ void pte_free_kernel(struct mm_struct *mm, pte_t *pte) static void __pte_free(pgtable_t pte) { - struct page *page = virt_to_page(pte); + struct ptdesc *ptdesc = virt_to_ptdesc(pte); - pgtable_pte_page_dtor(page); - __free_page(page); + ptdesc_pte_dtor(ptdesc); + ptdesc_free(ptdesc); } void pte_free(struct mm_struct *mm, pgtable_t pte) From patchwork Mon Apr 17 20:50:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214728 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EEF28C7EE2A for ; Mon, 17 Apr 2023 20:55:55 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230372AbjDQUzy (ORCPT ); Mon, 17 Apr 2023 16:55:54 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40004 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231177AbjDQUyJ (ORCPT ); Mon, 17 Apr 
2023 16:54:09 -0400 Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com [IPv6:2607:f8b0:4864:20::1035]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3047F83F2; Mon, 17 Apr 2023 13:53:29 -0700 (PDT) Received: by mail-pj1-x1035.google.com with SMTP id b2-20020a17090a6e0200b002470b249e59so16987538pjk.4; Mon, 17 Apr 2023 13:53:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764808; x=1684356808; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=CSVYEUhM0zGHlDR1yqa5KIpwwPynV1rh1Z3AvIT239o=; b=Bs6MBnbjQkX1GScZCbLklar3CQYsHMcNQJhdEJMSRoz85XciXhpnQ0YtDU9kdG/nWk TxMCtPr19eviQJI5aQ6Z23KjvD0/UYdwUUF801SK8Z3tk/XuPH8n6ufK4lU6OPOoTkEa tcxsOYaiL60cLVhSI6wekl2Ojt4T3pVi0l4/Kpd6dmP8a2GTqJPhoPlibhYfl7AzaVkC 8dHP626PMZuKBlHUuWDeE5Ws0Xf5nvhx1+eNXqdjJYxnU1JpJLYxfuGpVyKb6jdKhgqK l+AozFzFnnh8JDe5VzEpQS6lDqTnIa40opwHXXYLbmcB20BMSqSIISIYF0OD142rh4KG 4Sbw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764808; x=1684356808; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=CSVYEUhM0zGHlDR1yqa5KIpwwPynV1rh1Z3AvIT239o=; b=HYgtvAJ1p/8+QKYWp5sc98ZH761rOeRxR4pO07KYqH0ooVCXOipJmcKdx4JrL1AyS2 4oByW+nuvDrh1apduoPt+gJghiqOV1EJPfmeBxSt5roa77OvR3PHX8lQbVFxUSmTm2Yp yose4eQY36XAdArL7Fb8afs+bdI/ew/FcgZAbrsEMGeqGcfbsXhjdKa1wLXyrLPydJuc 0yWd24YdFecHQFukBiMlBMJGMyNrb36KHhV82bXMqvIpcuPO4yGmlFqivcHJ7s28T0Z9 vOkza1DfBIylw643PrVdWE645yHRGspXVPdElE68sYub20UJH0879y3HVikRIEWkW5lj DKig== X-Gm-Message-State: AAQBX9ciyxSKNC9yyWAw95wjzpVqhaEIWxD4U/mNmtZvTa0BYAN1yqYS /nc/Dzcm6+08hOOSxDD50Uk= X-Google-Smtp-Source: AKy350a2GcdGbRUVrvD14pA7yb7zjU30cICJUnhJQSgTB8CRLuH7Cup5yhi/f0UjvURoMBG/RaZdVQ== X-Received: by 2002:a17:903:1206:b0:19f:1871:3dcd with SMTP id l6-20020a170903120600b0019f18713dcdmr272381plh.5.1681764808447; Mon, 17 Apr 2023 13:53:28 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.27 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:28 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 31/33] sparc: Convert pgtable_pte_page_{ctor, dtor}() to ptdesc equivalents Date: Mon, 17 Apr 2023 13:50:46 -0700 Message-Id: <20230417205048.15870-32-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable pte constructor/destructors with ptdesc equivalents. 
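srmmu packs several PTE tables into one nocache page, so only the constructor and destructor calls change here: they still run when a page gains its first PTE table and when it loses its last one. An illustrative sketch of that pairing (the example_* helpers are hypothetical; the real hunks below keep the existing page_table_lock protection and nocache allocation):

static bool example_srmmu_table_get(struct page *page)
{
	/* refcount 1 -> 2 means this page just gained its first PTE table */
	if (page_ref_inc_return(page) == 2 &&
	    !ptdesc_pte_ctor(page_ptdesc(page))) {
		page_ref_dec(page);
		return false;
	}
	return true;
}

static void example_srmmu_table_put(struct page *page)
{
	/* back down to refcount 1 means the last PTE table just went away */
	if (page_ref_dec_return(page) == 1)
		ptdesc_pte_dtor(page_ptdesc(page));
}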
Signed-off-by: Vishal Moola (Oracle) --- arch/sparc/mm/srmmu.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c index 13f027afc875..964938aa7b88 100644 --- a/arch/sparc/mm/srmmu.c +++ b/arch/sparc/mm/srmmu.c @@ -355,7 +355,8 @@ pgtable_t pte_alloc_one(struct mm_struct *mm) return NULL; page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT); spin_lock(&mm->page_table_lock); - if (page_ref_inc_return(page) == 2 && !pgtable_pte_page_ctor(page)) { + if (page_ref_inc_return(page) == 2 && + !ptdesc_pte_ctor(page_ptdesc(page))) { page_ref_dec(page); ptep = NULL; } @@ -371,7 +372,7 @@ void pte_free(struct mm_struct *mm, pgtable_t ptep) page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT); spin_lock(&mm->page_table_lock); if (page_ref_dec_return(page) == 1) - pgtable_pte_page_dtor(page); + ptdesc_pte_dtor(page_ptdesc(page)); spin_unlock(&mm->page_table_lock); srmmu_free_nocache(ptep, SRMMU_PTE_TABLE_SIZE); From patchwork Mon Apr 17 20:50:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214729 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 566F9C7EE20 for ; Mon, 17 Apr 2023 20:56:05 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231362AbjDQU4E (ORCPT ); Mon, 17 Apr 2023 16:56:04 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40234 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230514AbjDQUyn (ORCPT ); Mon, 17 Apr 2023 16:54:43 -0400 Received: from mail-pl1-x62e.google.com (mail-pl1-x62e.google.com [IPv6:2607:f8b0:4864:20::62e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 524C2659A; Mon, 17 Apr 2023 13:53:30 -0700 (PDT) Received: by mail-pl1-x62e.google.com with SMTP id l21so9395668pla.5; Mon, 17 Apr 2023 13:53:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764810; x=1684356810; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=m8/b+eKq2obnq3wrn4AaChyOhPXbZKipyE2UTOB7XHg=; b=RTJID0Q/WW7iedj/afuCWToOfH/aBrY/NrWNeyN7ByP/usee9y9Bzy2eV2Pryddh5k SgN8rXjbYlFESamvA2kpEGUYeNDAEQAcb5DBJGBzqZ39H1PUjeRb3TeWmdsscSkj+duB XdhsoeufQBiAlQ1VDbbJWcUzg1wdK3pe72OAkzLWRjQ1C0S075Nqq87qXF8IdfaSlDxW +Bud3Bjv7jwgjZOfa7R1sSZhrnf0y6c/vZkSvJ9Qym0uRrXk7trecdbr2o902twf3Tei jaw/7g2aCCpoNC5Ttid57NGpw+n0jzjqhu5TM+KCbKbeUljRTtHuGZwGu624c+XnB9eN r1BQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764810; x=1684356810; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=m8/b+eKq2obnq3wrn4AaChyOhPXbZKipyE2UTOB7XHg=; b=CtSkceqouBFbMt8hXA7jrAgfUBRZJvjlvnWFkrQCMy2LU/5UcGYPnP0gZRPAd+OAur 4EFahtcPLXLwN2uFomjeMaU+3hkCqiUiMUnkvEnKbLlkRR/1d+J/ZzC16fAqmNZK9JyL zFV9FlszdJXsEL+4J02TKFs4spy6RvINn7FxFXbDoLpvklFEtYOwXbsN6vir82G560wk AwBVNuLvx3XW/RHBJZddb0obbDX4XRtRqZsROlPrFlhj5anMx/HFx9KgVox3xPUVKzBg eI7QrFWmUaaQDHQmVCAoKXX0kKvzUFne1lT/UuwZ0fs76CsTOZt41VczSz8t4WZGGQwf O5IQ== X-Gm-Message-State: AAQBX9eeSvauDWouWS9Z8zfWUParQKudfd0CDk2SyjNEkfgXphejVWUZ 
1zFH6kpoKKspm97nNCMo7xM= X-Google-Smtp-Source: AKy350Ygryz7WWGm7uHh1637qBCeDzWZK5X9wTShPkAHp+hvTpFfVrAPSXU4dDYY+IJtVCH6njxkhg== X-Received: by 2002:a17:90a:4144:b0:240:973d:b436 with SMTP id m4-20020a17090a414400b00240973db436mr14169767pjg.49.1681764809815; Mon, 17 Apr 2023 13:53:29 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:29 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 32/33] um: Convert {pmd, pte}_free_tlb() to use ptdescs Date: Mon, 17 Apr 2023 13:50:47 -0700 Message-Id: <20230417205048.15870-33-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. Also cleans up some spacing issues. Signed-off-by: Vishal Moola (Oracle) --- arch/um/include/asm/pgalloc.h | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h index 8ec7cd46dd96..760b029505c1 100644 --- a/arch/um/include/asm/pgalloc.h +++ b/arch/um/include/asm/pgalloc.h @@ -25,19 +25,19 @@ */ extern pgd_t *pgd_alloc(struct mm_struct *); -#define __pte_free_tlb(tlb,pte, address) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb),(pte)); \ +#define __pte_free_tlb(tlb, pte, address) \ +do { \ + ptdesc_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #ifdef CONFIG_3_LEVEL_PGTABLES -#define __pmd_free_tlb(tlb, pmd, address) \ -do { \ - pgtable_pmd_page_dtor(virt_to_page(pmd)); \ - tlb_remove_page((tlb),virt_to_page(pmd)); \ -} while (0) \ +#define __pmd_free_tlb(tlb, pmd, address) \ +do { \ + ptdesc_pmd_dtor(virt_to_ptdesc(pmd)); \ + tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd)); \ +} while (0) #endif From patchwork Mon Apr 17 20:50:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Vishal Moola X-Patchwork-Id: 13214730 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 6736EC7EE22 for ; Mon, 17 Apr 2023 20:56:16 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231296AbjDQU4P (ORCPT ); Mon, 17 Apr 2023 16:56:15 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:40276 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230361AbjDQUzI (ORCPT ); Mon, 17 Apr 2023 16:55:08 
-0400 Received: from mail-pj1-x102b.google.com (mail-pj1-x102b.google.com [IPv6:2607:f8b0:4864:20::102b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id DEC016A6E; Mon, 17 Apr 2023 13:53:31 -0700 (PDT) Received: by mail-pj1-x102b.google.com with SMTP id w11so27723749pjh.5; Mon, 17 Apr 2023 13:53:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1681764811; x=1684356811; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=lI4mQ87WoWgY+PGEzL+RyVT2V9KzYHmGTUoQkkpMa78=; b=OlBOZiQXTEQ2yJNCXWXVeEV70jBd/5g1C1TEFSYKGJH4RB2t5MeymRQTJkCAMhqY8X xT0QIMwkP3fej3rBN/YVKJNeHnVxdz5dc1MiD+bCMEF9s0dHv6uN4wyDmNsH2Y2Ym+iY 2ZNsAP/RgJk51+uVp81qcQOj+bGNImtPBjaen/JrqoOw/PBUibNGoaM/ZCPpfG5uzvkW XBTMu6ICJd2ADipenkzYOjnqUxYIKWTiKPtzD1nHG87S/riR67Qes5ac47aDjBNBedgZ nWtoSmVXLxbkA7A5m2dqK2dJ5Frc5F3qBCVwCqlbD2BXSOmg77x4kXyTkenC4W3QlTw8 reIw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1681764811; x=1684356811; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=lI4mQ87WoWgY+PGEzL+RyVT2V9KzYHmGTUoQkkpMa78=; b=lVlYSTD1LuCsc+L0ebS06Q8vheOM+rsoe0fIojyurHvjcsFq0hQcs0aQHmjumkn4VS K1KQb5ruzAK3IhPituaQOiopXc3sBEFtIMcg7sdlii46vPYtbHJlczejiiPEdwlOclOZ QDFJ13W/FLEgg86995sW9MfA5UQqJuyLgySET45rnZi7JOVbXH8FYU3cdxVfPvmumCn9 i8fQ0z4SBRlO2lyYBEHlqRMFE9L7x1ZyRj7jLHJVpXAvV4+YMhq1J+41IbbAOoIz3jSM 6admw9+ohKNrSRe9SlhNeLuG1mnXdCNXG8sGKgSdM66a2kDiPt5rHeNo0w3E6ELbb+fX zz8w== X-Gm-Message-State: AAQBX9eZJrRqjSyTrtpJBZWn0lFEnISOie0tjP+Cgz+RWVo/FLRxpsb4 ZvJjZ3JFViKlDKixm/7agQA= X-Google-Smtp-Source: AKy350bto7gkYUEXZmV5HcAXbLEbfMM+TO53YOoTMdsdAnKO7c7cFoDN5d9Q3fv9Tr0h94wXesVw1Q== X-Received: by 2002:a17:902:ec82:b0:1a1:b137:4975 with SMTP id x2-20020a170902ec8200b001a1b1374975mr256275plg.49.1681764811306; Mon, 17 Apr 2023 13:53:31 -0700 (PDT) Received: from fedora.hsd1.ca.comcast.net ([2601:644:937f:7f20::c139]) by smtp.googlemail.com with ESMTPSA id h7-20020a17090ac38700b0022335f1dae2sm7609707pjt.22.2023.04.17.13.53.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 17 Apr 2023 13:53:31 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, "Vishal Moola (Oracle)" Subject: [PATCH 33/33] mm: Remove pgtable_{pmd, pte}_page_{ctor, dtor}() wrappers Date: Mon, 17 Apr 2023 13:50:48 -0700 Message-Id: <20230417205048.15870-34-vishal.moola@gmail.com> X-Mailer: git-send-email 2.39.2 In-Reply-To: <20230417205048.15870-1-vishal.moola@gmail.com> References: <20230417205048.15870-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org These functions are no longer necessary. Remove them and clean up the Documentation referencing them.
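For context, the wrappers being removed are one-line forwarders to the ptdesc API; the include/linux/mm.h hunks below delete definitions of this shape (shown here for the PTE pair, the PMD pair is analogous):

	static inline bool pgtable_pte_page_ctor(struct page *page)
	{
		return ptdesc_pte_ctor(page_ptdesc(page));
	}

	static inline void pgtable_pte_page_dtor(struct page *page)
	{
		ptdesc_pte_dtor(page_ptdesc(page));
	}

With the earlier patches in the series having switched every caller to ptdesc_pte_ctor()/ptdesc_pte_dtor() (and the PMD equivalents) directly, nothing references these wrappers any more.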
Signed-off-by: Vishal Moola (Oracle) --- Documentation/mm/split_page_table_lock.rst | 12 +++++------ .../zh_CN/mm/split_page_table_lock.rst | 14 ++++++------- include/linux/mm.h | 20 ------------------- 3 files changed, 13 insertions(+), 33 deletions(-) diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst index 50ee0dfc95be..b3c612183135 100644 --- a/Documentation/mm/split_page_table_lock.rst +++ b/Documentation/mm/split_page_table_lock.rst @@ -53,7 +53,7 @@ Support of split page table lock by an architecture =================================================== There's no need in special enabling of PTE split page table lock: everything -required is done by pgtable_pte_page_ctor() and pgtable_pte_page_dtor(), which +required is done by ptdesc_pte_ctor() and ptdesc_pte_dtor(), which must be called on PTE table allocation / freeing. Make sure the architecture doesn't use slab allocator for page table @@ -63,8 +63,8 @@ This field shares storage with page->ptl. PMD split lock only makes sense if you have more than two page table levels. -PMD split lock enabling requires pgtable_pmd_page_ctor() call on PMD table -allocation and pgtable_pmd_page_dtor() on freeing. +PMD split lock enabling requires ptdesc_pmd_ctor() call on PMD table +allocation and ptdesc_pmd_dtor() on freeing. Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing @@ -72,7 +72,7 @@ paths: i.e X86_PAE preallocate few PMDs on pgd_alloc(). With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK. -NOTE: pgtable_pte_page_ctor() and pgtable_pmd_page_ctor() can fail -- it must +NOTE: ptdesc_pte_ctor() and ptdesc_pmd_ctor() can fail -- it must be handled properly. page->ptl @@ -92,7 +92,7 @@ trick: split lock with enabled DEBUG_SPINLOCK or DEBUG_LOCK_ALLOC, but costs one more cache line for indirect access; -The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in -pgtable_pmd_page_ctor() for PMD table. +The spinlock_t allocated in ptdesc_pte_ctor() for PTE table and in +ptdesc_pmd_ctor() for PMD table. Please, never access page->ptl directly -- use appropriate helper. 
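As the NOTE in the documentation above says, the constructors can fail and that failure must be handled. A minimal, hypothetical sketch of how an architecture's pmd_alloc_one() is expected to handle ptdesc_pmd_ctor() failure (not taken from any particular arch; GFP flags and return conventions vary):

	pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
	{
		struct page *page;

		page = alloc_pages(GFP_KERNEL_ACCOUNT | __GFP_ZERO, 0);
		if (!page)
			return NULL;
		/* The constructor may fail (e.g. split-ptlock allocation), so undo and bail out. */
		if (!ptdesc_pmd_ctor(page_ptdesc(page))) {
			__free_pages(page, 0);
			return NULL;
		}
		return (pmd_t *)page_address(page);
	}

The same documentation rules apply to the translated copy of this file, updated below.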
diff --git a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst index 4fb7aa666037..a3323eb9dc40 100644 --- a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst +++ b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst @@ -56,16 +56,16 @@ Hugetlb特定的辅助函数: 架构对分页表锁的支持 ==================== -没有必要特别启用PTE分页表锁:所有需要的东西都由pgtable_pte_page_ctor() -和pgtable_pte_page_dtor()完成,它们必须在PTE表分配/释放时被调用。 +没有必要特别启用PTE分页表锁:所有需要的东西都由ptdesc_pte_ctor() +和ptdesc_pte_dtor()完成,它们必须在PTE表分配/释放时被调用。 确保架构不使用slab分配器来分配页表:slab使用page->slab_cache来分配其页 面。这个区域与page->ptl共享存储。 PMD分页锁只有在你有两个以上的页表级别时才有意义。 -启用PMD分页锁需要在PMD表分配时调用pgtable_pmd_page_ctor(),在释放时调 -用pgtable_pmd_page_dtor()。 +启用PMD分页锁需要在PMD表分配时调用ptdesc_pmd_ctor(),在释放时调 +用ptdesc_pmd_dtor()。 分配通常发生在pmd_alloc_one()中,释放发生在pmd_free()和pmd_free_tlb() 中,但要确保覆盖所有的PMD表分配/释放路径:即X86_PAE在pgd_alloc()中预先 @@ -73,7 +73,7 @@ PMD分页锁只有在你有两个以上的页表级别时才有意义。 一切就绪后,你可以设置CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK。 -注意:pgtable_pte_page_ctor()和pgtable_pmd_page_ctor()可能失败--必 +注意:ptdesc_pte_ctor()和ptdesc_pmd_ctor()可能失败--必 须正确处理。 page->ptl @@ -90,7 +90,7 @@ page->ptl用于访问分割页表锁,其中'page'是包含该表的页面struc 的指针并动态分配它。这允许在启用DEBUG_SPINLOCK或DEBUG_LOCK_ALLOC的 情况下使用分页锁,但由于间接访问而多花了一个缓存行。 -PTE表的spinlock_t分配在pgtable_pte_page_ctor()中,PMD表的spinlock_t -分配在pgtable_pmd_page_ctor()中。 +PTE表的spinlock_t分配在ptdesc_pte_ctor()中,PMD表的spinlock_t +分配在ptdesc_pmd_ctor()中。 请不要直接访问page->ptl - -使用适当的辅助函数。 diff --git a/include/linux/mm.h b/include/linux/mm.h index cb136d2fdf74..e08638dc58cf 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2858,11 +2858,6 @@ static inline bool ptdesc_pte_ctor(struct ptdesc *ptdesc) return true; } -static inline bool pgtable_pte_page_ctor(struct page *page) -{ - return ptdesc_pte_ctor(page_ptdesc(page)); -} - static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc) { struct folio *folio = ptdesc_folio(ptdesc); @@ -2872,11 +2867,6 @@ static inline void ptdesc_pte_dtor(struct ptdesc *ptdesc) lruvec_stat_sub_folio(folio, NR_PAGETABLE); } -static inline void pgtable_pte_page_dtor(struct page *page) -{ - ptdesc_pte_dtor(page_ptdesc(page)); -} - #define pte_offset_map_lock(mm, pmd, address, ptlp) \ ({ \ spinlock_t *__ptl = pte_lockptr(mm, pmd); \ @@ -2967,11 +2957,6 @@ static inline bool ptdesc_pmd_ctor(struct ptdesc *ptdesc) return true; } -static inline bool pgtable_pmd_page_ctor(struct page *page) -{ - return ptdesc_pmd_ctor(page_ptdesc(page)); -} - static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc) { struct folio *folio = ptdesc_folio(ptdesc); @@ -2981,11 +2966,6 @@ static inline void ptdesc_pmd_dtor(struct ptdesc *ptdesc) lruvec_stat_sub_folio(folio, NR_PAGETABLE); } -static inline void pgtable_pmd_page_dtor(struct page *page) -{ - ptdesc_pmd_dtor(page_ptdesc(page)); -} - /* * No scalability reason to split PUD locks yet, but follow the same pattern * as the PMD locks to make it easier if we decide to. The VM should not be