From patchwork Mon Aug 7 23:04:43 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org,
    linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev,
    linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org,
    linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-um@lists.infradead.org, xen-devel@lists.xenproject.org,
    kvm@vger.kernel.org, Hugh Dickins, "Vishal Moola (Oracle)", Mike Rapoport
Subject: [PATCH mm-unstable v9 01/31] mm: Add PAGE_TYPE_OP folio functions
Date: Mon, 7 Aug 2023 16:04:43 -0700
Message-Id: <20230807230513.102486-2-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>

No folio equivalents for page type operations have been defined, so
define them for later folio conversions.

This also changes the Page##uname macros to take a const struct page *,
since we only read the memory here.

Acked-by: Mike Rapoport (IBM)
Signed-off-by: Vishal Moola (Oracle)
---
 include/linux/page-flags.h | 30 +++++++++++++++++++++++-------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 92a2063a0a23..9218028caf33 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -908,6 +908,8 @@ static inline bool is_page_hwpoison(struct page *page)

 #define PageType(page, flag)						\
 	((page->page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)
+#define folio_test_type(folio, flag)					\
+	((folio->page.page_type & (PAGE_TYPE_BASE | flag)) == PAGE_TYPE_BASE)

 static inline int page_type_has_type(unsigned int page_type)
 {
@@ -919,27 +921,41 @@ static inline int page_has_type(struct page *page)
 	return page_type_has_type(page->page_type);
 }

-#define PAGE_TYPE_OPS(uname, lname)					\
-static __always_inline int Page##uname(struct page *page)		\
+#define PAGE_TYPE_OPS(uname, lname, fname)				\
+static __always_inline int Page##uname(const struct page *page)	\
 {									\
 	return PageType(page, PG_##lname);				\
 }									\
+static __always_inline int folio_test_##fname(const struct folio *folio)\
+{									\
+	return folio_test_type(folio, PG_##lname);			\
+}									\
 static __always_inline void __SetPage##uname(struct page *page)	\
 {									\
 	VM_BUG_ON_PAGE(!PageType(page, 0), page);			\
 	page->page_type &= ~PG_##lname;					\
 }									\
+static __always_inline void __folio_set_##fname(struct folio *folio)	\
+{									\
+	VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio);		\
+	folio->page.page_type &= ~PG_##lname;				\
+}									\
 static __always_inline void __ClearPage##uname(struct page *page)	\
 {									\
 	VM_BUG_ON_PAGE(!Page##uname(page), page);			\
 	page->page_type |= PG_##lname;					\
-}
+}									\
+static __always_inline void __folio_clear_##fname(struct folio *folio)	\
+{									\
+	VM_BUG_ON_FOLIO(!folio_test_##fname(folio), folio);		\
+	folio->page.page_type |= PG_##lname;				\
+}									\

 /*
  * PageBuddy() indicates that the page is free and in the buddy system
  * (see mm/page_alloc.c).
  */
-PAGE_TYPE_OPS(Buddy, buddy)
+PAGE_TYPE_OPS(Buddy, buddy, buddy)

 /*
  * PageOffline() indicates that the page is logically offline although the
@@ -963,7 +979,7 @@ PAGE_TYPE_OPS(Buddy, buddy)
  * pages should check PageOffline() and synchronize with such drivers using
  * page_offline_freeze()/page_offline_thaw().
  */
-PAGE_TYPE_OPS(Offline, offline)
+PAGE_TYPE_OPS(Offline, offline, offline)

 extern void page_offline_freeze(void);
 extern void page_offline_thaw(void);
@@ -973,12 +989,12 @@ extern void page_offline_end(void);
 /*
  * Marks pages in use as page tables.
  */
-PAGE_TYPE_OPS(Table, table)
+PAGE_TYPE_OPS(Table, table, pgtable)

 /*
  * Marks guardpages used with debug_pagealloc.
  */
-PAGE_TYPE_OPS(Guard, guard)
+PAGE_TYPE_OPS(Guard, guard, guard)

 extern bool is_free_buddy_page(struct page *page);
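
For orientation, this is roughly what the new three-argument form expands
to for the page-table case, PAGE_TYPE_OPS(Table, table, pgtable). The
expansion below is only an illustration derived from the macro body above,
not code added by the patch, and the unchanged page-based set/clear helpers
are omitted:

static __always_inline int PageTable(const struct page *page)
{
	return PageType(page, PG_table);
}

static __always_inline int folio_test_pgtable(const struct folio *folio)
{
	return folio_test_type(folio, PG_table);
}

static __always_inline void __folio_set_pgtable(struct folio *folio)
{
	VM_BUG_ON_FOLIO(!folio_test_type(folio, 0), folio);
	folio->page.page_type &= ~PG_table;
}

static __always_inline void __folio_clear_pgtable(struct folio *folio)
{
	VM_BUG_ON_FOLIO(!folio_test_pgtable(folio), folio);
	folio->page.page_type |= PG_table;
}

Callers that already hold a folio can therefore test and flip the page-table
type directly, which is the point of defining these ahead of the later folio
conversions mentioned in the commit message.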
From patchwork Mon Aug 7 23:04:44 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH mm-unstable v9 02/31] pgtable: create struct ptdesc
Date: Mon, 7 Aug 2023 16:04:44 -0700
Message-Id: <20230807230513.102486-3-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>

Currently, page table information is stored within struct page. As part
of simplifying struct page, create struct ptdesc for page table
information.

Signed-off-by: Vishal Moola (Oracle)
Acked-by: Mike Rapoport (IBM)
---
 include/linux/mm_types.h | 70 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 18c8c3d793b0..cb47438ae17f 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -397,6 +397,76 @@ FOLIO_MATCH(flags, _flags_2);
 FOLIO_MATCH(compound_head, _head_2);
 #undef FOLIO_MATCH

+/**
+ * struct ptdesc - Memory descriptor for page tables.
+ * @__page_flags: Same as page flags. Unused for page tables.
+ * @pt_rcu_head: For freeing page table pages.
+ * @pt_list: List of used page tables. Used for s390 and x86.
+ * @_pt_pad_1: Padding that aliases with page's compound head.
+ * @pmd_huge_pte: Protected by ptdesc->ptl, used for THPs.
+ * @__page_mapping: Aliases with page->mapping. Unused for page tables.
+ * @pt_mm: Used for x86 pgds.
+ * @pt_frag_refcount: For fragmented page table tracking. Powerpc and s390 only.
+ * @_pt_pad_2: Padding to ensure proper alignment.
+ * @ptl: Lock for the page table.
+ * @__page_type: Same as page->page_type. Unused for page tables.
+ * @_refcount: Same as page refcount. Used for s390 page tables.
+ * @pt_memcg_data: Memcg data. Tracked for page tables here.
+ *
+ * This struct overlays struct page for now. Do not modify without a good
+ * understanding of the issues.
+ */
+struct ptdesc {
+	unsigned long __page_flags;
+
+	union {
+		struct rcu_head pt_rcu_head;
+		struct list_head pt_list;
+		struct {
+			unsigned long _pt_pad_1;
+			pgtable_t pmd_huge_pte;
+		};
+	};
+	unsigned long __page_mapping;
+
+	union {
+		struct mm_struct *pt_mm;
+		atomic_t pt_frag_refcount;
+	};
+
+	union {
+		unsigned long _pt_pad_2;
+#if ALLOC_SPLIT_PTLOCKS
+		spinlock_t *ptl;
+#else
+		spinlock_t ptl;
+#endif
+	};
+	unsigned int __page_type;
+	atomic_t _refcount;
+#ifdef CONFIG_MEMCG
+	unsigned long pt_memcg_data;
+#endif
+};
+
+#define TABLE_MATCH(pg, pt)						\
+	static_assert(offsetof(struct page, pg) == offsetof(struct ptdesc, pt))
+TABLE_MATCH(flags, __page_flags);
+TABLE_MATCH(compound_head, pt_list);
+TABLE_MATCH(compound_head, _pt_pad_1);
+TABLE_MATCH(pmd_huge_pte, pmd_huge_pte);
+TABLE_MATCH(mapping, __page_mapping);
+TABLE_MATCH(pt_mm, pt_mm);
+TABLE_MATCH(ptl, ptl);
+TABLE_MATCH(rcu_head, pt_rcu_head);
+TABLE_MATCH(page_type, __page_type);
+TABLE_MATCH(_refcount, _refcount);
+#ifdef CONFIG_MEMCG
+TABLE_MATCH(memcg_data, pt_memcg_data);
+#endif
+#undef TABLE_MATCH
+static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
+
 /*
  * Used for sizing the vmemmap region on some architectures
  */
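
To make the overlay explicit (an illustrative expansion, not code from the
patch): each TABLE_MATCH() line compiles down to a static_assert that pins a
ptdesc field to the offset of the matching struct page field, for example:

	/* TABLE_MATCH(ptl, ptl) expands to roughly: */
	static_assert(offsetof(struct page, ptl) ==
		      offsetof(struct ptdesc, ptl));

Together with the trailing static_assert(sizeof(struct ptdesc) <=
sizeof(struct page)), these checks are what make it safe to view a
page-table page through either type with a plain pointer cast, which the
helpers added in the next patch depend on.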
From patchwork Mon Aug 7 23:04:45 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH mm-unstable v9 03/31] mm: add utility functions for ptdesc
Date: Mon, 7 Aug 2023 16:04:45 -0700
Message-Id: <20230807230513.102486-4-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>

Introduce utility functions setting the foundation for ptdescs. These
will also assist in splitting out struct ptdesc from struct page.

Functions that focus on the descriptor are prefixed with ptdesc_* while
functions that focus on the pagetable are prefixed with pagetable_*.

pagetable_alloc() is defined to allocate new ptdesc pages as compound
pages. This standardizes ptdescs by allowing for one allocation and one
free function, in contrast to two allocation and two free functions.

Signed-off-by: Vishal Moola (Oracle)
---
 include/asm-generic/tlb.h | 11 +++++++
 include/linux/mm.h        | 61 +++++++++++++++++++++++++++++++++++++++
 include/linux/mm_types.h  | 12 ++++++++
 3 files changed, 84 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index bc32a2284c56..129a3a759976 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -480,6 +480,17 @@ static inline void tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 	return tlb_remove_page_size(tlb, page, PAGE_SIZE);
 }

+static inline void tlb_remove_ptdesc(struct mmu_gather *tlb, void *pt)
+{
+	tlb_remove_table(tlb, pt);
+}
+
+/* Like tlb_remove_ptdesc, but for page-like page directories. */
+static inline void tlb_remove_page_ptdesc(struct mmu_gather *tlb, struct ptdesc *pt)
+{
+	tlb_remove_page(tlb, ptdesc_page(pt));
+}
+
 static inline void tlb_change_page_size(struct mmu_gather *tlb,
 						     unsigned int page_size)
 {
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ec15ebc6def1..54dc176b90ea 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2806,6 +2806,57 @@ static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long a
 }
 #endif /* CONFIG_MMU */

+static inline struct ptdesc *virt_to_ptdesc(const void *x)
+{
+	return page_ptdesc(virt_to_page(x));
+}
+
+static inline void *ptdesc_to_virt(const struct ptdesc *pt)
+{
+	return page_to_virt(ptdesc_page(pt));
+}
+
+static inline void *ptdesc_address(const struct ptdesc *pt)
+{
+	return folio_address(ptdesc_folio(pt));
+}
+
+static inline bool pagetable_is_reserved(struct ptdesc *pt)
+{
+	return folio_test_reserved(ptdesc_folio(pt));
+}
+
+/**
+ * pagetable_alloc - Allocate pagetables
+ * @gfp: GFP flags
+ * @order: desired pagetable order
+ *
+ * pagetable_alloc allocates memory for page tables as well as a page table
+ * descriptor to describe that memory.
+ *
+ * Return: The ptdesc describing the allocated page tables.
+ */
+static inline struct ptdesc *pagetable_alloc(gfp_t gfp, unsigned int order)
+{
+	struct page *page = alloc_pages(gfp | __GFP_COMP, order);
+
+	return page_ptdesc(page);
+}
+
+/**
+ * pagetable_free - Free pagetables
+ * @pt: The page table descriptor
+ *
+ * pagetable_free frees the memory of all page tables described by a page
+ * table descriptor and the memory for the descriptor itself.
+ */
+static inline void pagetable_free(struct ptdesc *pt)
+{
+	struct page *page = ptdesc_page(pt);
+
+	__free_pages(page, compound_order(page));
+}
+
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
@@ -2932,6 +2983,11 @@ static inline struct page *pmd_pgtable_page(pmd_t *pmd)
 	return virt_to_page((void *)((unsigned long) pmd & mask));
 }

+static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
+{
+	return page_ptdesc(pmd_pgtable_page(pmd));
+}
+
 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
 	return ptlock_ptr(pmd_pgtable_page(pmd));
@@ -3044,6 +3100,11 @@ static inline void mark_page_reserved(struct page *page)
 	adjust_managed_page_count(page, -1);
 }

+static inline void free_reserved_ptdesc(struct ptdesc *pt)
+{
+	free_reserved_page(ptdesc_page(pt));
+}
+
 /*
  * Default method to free all the __init memory into the buddy system.
  * The freed pages will be poisoned with pattern "poison" if it's within
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index cb47438ae17f..ea34b22b4cbf 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -467,6 +467,18 @@ TABLE_MATCH(memcg_data, pt_memcg_data);
 #undef TABLE_MATCH
 static_assert(sizeof(struct ptdesc) <= sizeof(struct page));

+#define ptdesc_page(pt)			(_Generic((pt),			\
+	const struct ptdesc *:		(const struct page *)(pt),	\
+	struct ptdesc *:		(struct page *)(pt)))
+
+#define ptdesc_folio(pt)		(_Generic((pt),			\
+	const struct ptdesc *:		(const struct folio *)(pt),	\
+	struct ptdesc *:		(struct folio *)(pt)))
+
+#define page_ptdesc(p)			(_Generic((p),			\
+	const struct page *:		(const struct ptdesc *)(p),	\
+	struct page *:			(struct ptdesc *)(p)))
+
 /*
  * Used for sizing the vmemmap region on some architectures
  */
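
As a usage sketch (a hypothetical caller, not part of this patch;
GFP_PGTABLE_KERNEL from the existing pgalloc headers is assumed to be the
appropriate mask here), a PTE-table allocation helper built on the new
primitives could look like:

static pte_t *example_pte_alloc_one_kernel(void)
{
	/* One order-0 page-table page, handed back as a ptdesc. */
	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL, 0);

	if (!ptdesc)
		return NULL;
	return (pte_t *)ptdesc_address(ptdesc);
}

static void example_pte_free_kernel(pte_t *pte)
{
	/* Map the table's kernel address back to its descriptor and free it. */
	pagetable_free(virt_to_ptdesc(pte));
}

Because pagetable_alloc() always sets __GFP_COMP, higher-order page tables
come back as compound pages, which is what lets a single pagetable_free()
cover every order.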
From patchwork Mon Aug 7 23:04:46 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH mm-unstable v9 04/31] mm: Convert pmd_pgtable_page() callers to use pmd_ptdesc()
Date: Mon, 7 Aug 2023 16:04:46 -0700
Message-Id: <20230807230513.102486-5-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>

Converts internal pmd_pgtable_page() callers to use pmd_ptdesc(). This
removes some direct accesses to struct page, working towards splitting
out struct ptdesc from struct page.

Acked-by: Mike Rapoport (IBM)
Signed-off-by: Vishal Moola (Oracle)
---
 include/linux/mm.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 54dc176b90ea..f6d14a5fe747 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2990,7 +2990,7 @@ static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)

 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_pgtable_page(pmd));
+	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
 }

 static inline bool pmd_ptlock_init(struct page *page)
@@ -3009,7 +3009,7 @@ static inline void pmd_ptlock_free(struct page *page)
 	ptlock_free(page);
 }

-#define pmd_huge_pte(mm, pmd) (pmd_pgtable_page(pmd)->pmd_huge_pte)
+#define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)

 #else
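
One point worth keeping in mind for the rest of the series (not part of the
patch text): pmd_ptdesc() is only a typed view of the same memory that
pmd_pgtable_page() returns, so the converted pmd_huge_pte() reads exactly
the storage it did before. A minimal sketch of that equivalence, assuming
the headers from the previous patches:

static void example_show_equivalence(pmd_t *pmd)
{
	/* Illustrative only: both pointers refer to the same descriptor
	 * memory, because struct ptdesc overlays struct page and
	 * TABLE_MATCH(pmd_huge_pte, pmd_huge_pte) pins the field offset. */
	struct page *page = pmd_pgtable_page(pmd);
	struct ptdesc *ptdesc = pmd_ptdesc(pmd);

	BUG_ON((void *)page != (void *)ptdesc);
	BUG_ON(&page->pmd_huge_pte != &ptdesc->pmd_huge_pte);
}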
From patchwork Mon Aug 7 23:04:47 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH mm-unstable v9 05/31] mm: Convert ptlock_alloc() to use ptdescs
Date: Mon, 7 Aug 2023 16:04:47 -0700
Message-Id: <20230807230513.102486-6-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Acked-by: Mike Rapoport (IBM)
Signed-off-by: Vishal Moola (Oracle)
---
 include/linux/mm.h | 6 +++---
 mm/memory.c        | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f6d14a5fe747..6aea8fb671f1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2860,7 +2860,7 @@ static inline void pagetable_free(struct ptdesc *pt)
 #if USE_SPLIT_PTE_PTLOCKS
 #if ALLOC_SPLIT_PTLOCKS
 void __init ptlock_cache_init(void);
-extern bool ptlock_alloc(struct page *page);
+bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);

 static inline spinlock_t *ptlock_ptr(struct page *page)
@@ -2872,7 +2872,7 @@ static inline void ptlock_cache_init(void)
 {
 }

-static inline bool ptlock_alloc(struct page *page)
+static inline bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	return true;
 }
@@ -2902,7 +2902,7 @@ static inline bool ptlock_init(struct page *page)
 	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
-	if (!ptlock_alloc(page))
+	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
 	spin_lock_init(ptlock_ptr(page));
 	return true;
diff --git a/mm/memory.c b/mm/memory.c
index 956aad8aff34..3606ef72ba70 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6134,14 +6134,14 @@ void __init ptlock_cache_init(void)
 			SLAB_PANIC, NULL);
 }

-bool ptlock_alloc(struct page *page)
+bool ptlock_alloc(struct ptdesc *ptdesc)
 {
 	spinlock_t *ptl;

 	ptl = kmem_cache_alloc(page_ptl_cachep, GFP_KERNEL);
 	if (!ptl)
 		return false;
-	page->ptl = ptl;
+	ptdesc->ptl = ptl;
 	return true;
 }
From patchwork Mon Aug 7 23:04:48 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH mm-unstable v9 06/31] mm: Convert ptlock_ptr() to use ptdescs
Date: Mon, 7 Aug 2023 16:04:48 -0700
Message-Id: <20230807230513.102486-7-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Acked-by: Mike Rapoport (IBM)
Signed-off-by: Vishal Moola (Oracle)
---
 arch/x86/xen/mmu_pv.c |  2 +-
 include/linux/mm.h    | 14 +++++++-------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index e0a975165de7..8796ec310483 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -667,7 +667,7 @@ static spinlock_t *xen_pte_lock(struct page *page, struct mm_struct *mm)
 	spinlock_t *ptl = NULL;

 #if USE_SPLIT_PTE_PTLOCKS
-	ptl = ptlock_ptr(page);
+	ptl = ptlock_ptr(page_ptdesc(page));
 	spin_lock_nest_lock(ptl, &mm->page_table_lock);
 #endif

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6aea8fb671f1..bc82a64e5f01 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2863,9 +2863,9 @@ void __init ptlock_cache_init(void);
 bool ptlock_alloc(struct ptdesc *ptdesc);
 extern void ptlock_free(struct page *page);

-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return page->ptl;
+	return ptdesc->ptl;
 }
 #else /* ALLOC_SPLIT_PTLOCKS */
 static inline void ptlock_cache_init(void)
@@ -2881,15 +2881,15 @@ static inline void ptlock_free(struct page *page)
 {
 }

-static inline spinlock_t *ptlock_ptr(struct page *page)
+static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc)
 {
-	return &page->ptl;
+	return &ptdesc->ptl;
 }
 #endif /* ALLOC_SPLIT_PTLOCKS */

 static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(pmd_page(*pmd));
+	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }

 static inline bool ptlock_init(struct page *page)
@@ -2904,7 +2904,7 @@ static inline bool ptlock_init(struct page *page)
 	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
 	if (!ptlock_alloc(page_ptdesc(page)))
 		return false;
-	spin_lock_init(ptlock_ptr(page));
+	spin_lock_init(ptlock_ptr(page_ptdesc(page)));
 	return true;
 }

@@ -2990,7 +2990,7 @@ static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)

 static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 {
-	return ptlock_ptr(ptdesc_page(pmd_ptdesc(pmd)));
+	return ptlock_ptr(pmd_ptdesc(pmd));
 }

 static inline bool pmd_ptlock_init(struct page *page)
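
The xen_pte_lock() hunk shows the call-site idiom the rest of the series
repeats: where an interface still hands out a struct page *, the pointer is
wrapped with page_ptdesc() (the zero-cost _Generic cast from patch 03)
before calling the converted helper. A minimal illustrative sketch, not
taken from the patch:

static void example_lock_pte_table(struct page *page)
{
	/* Take and release the split PTE lock for a page-table page
	 * through its ptdesc view. */
	spinlock_t *ptl = ptlock_ptr(page_ptdesc(page));

	spin_lock(ptl);
	/* ... inspect or modify the PTEs in this table ... */
	spin_unlock(ptl);
}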
From patchwork Mon Aug 7 23:04:49 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH mm-unstable v9 07/31] mm: Convert pmd_ptlock_init() to use ptdescs
Date: Mon, 7 Aug 2023 16:04:49 -0700
Message-Id: <20230807230513.102486-8-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Acked-by: Mike Rapoport (IBM)
Signed-off-by: Vishal Moola (Oracle)
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc82a64e5f01..040982fe9063 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2993,12 +2993,12 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(pmd_ptdesc(pmd));
 }

-static inline bool pmd_ptlock_init(struct page *page)
+static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	page->pmd_huge_pte = NULL;
+	ptdesc->pmd_huge_pte = NULL;
 #endif
-	return ptlock_init(page);
+	return ptlock_init(ptdesc_page(ptdesc));
 }

 static inline void pmd_ptlock_free(struct page *page)
@@ -3018,7 +3018,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }

-static inline bool pmd_ptlock_init(struct page *page) { return true; }
+static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void pmd_ptlock_free(struct page *page) {}

 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
@@ -3034,7 +3034,7 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd)

 static inline bool pgtable_pmd_page_ctor(struct page *page)
 {
-	if (!pmd_ptlock_init(page))
+	if (!pmd_ptlock_init(page_ptdesc(page)))
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
From patchwork Mon Aug 7 23:04:50 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH mm-unstable v9 08/31] mm: Convert ptlock_init() to use ptdescs
Date: Mon, 7 Aug 2023 16:04:50 -0700
Message-Id: <20230807230513.102486-9-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Acked-by: Mike Rapoport (IBM)
Signed-off-by: Vishal Moola (Oracle)
---
 include/linux/mm.h | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 040982fe9063..13947b17f25e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2892,7 +2892,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }

-static inline bool ptlock_init(struct page *page)
+static inline bool ptlock_init(struct ptdesc *ptdesc)
 {
 	/*
 	 * prep_new_page() initialize page->private (and therefore page->ptl)
@@ -2901,10 +2901,10 @@ static inline bool ptlock_init(struct page *page)
 	 * It can happen if arch try to use slab for page table allocation:
 	 * slab code uses page->slab_cache, which share storage with page->ptl.
 	 */
-	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
-	if (!ptlock_alloc(page_ptdesc(page)))
+	VM_BUG_ON_PAGE(*(unsigned long *)&ptdesc->ptl, ptdesc_page(ptdesc));
+	if (!ptlock_alloc(ptdesc))
 		return false;
-	spin_lock_init(ptlock_ptr(page_ptdesc(page)));
+	spin_lock_init(ptlock_ptr(ptdesc));
 	return true;
 }

@@ -2917,13 +2917,13 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
 	return &mm->page_table_lock;
 }
 static inline void ptlock_cache_init(void) {}
-static inline bool ptlock_init(struct page *page) { return true; }
+static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; }
 static inline void ptlock_free(struct page *page) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */

 static inline bool pgtable_pte_page_ctor(struct page *page)
 {
-	if (!ptlock_init(page))
+	if (!ptlock_init(page_ptdesc(page)))
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
@@ -2998,7 +2998,7 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	ptdesc->pmd_huge_pte = NULL;
 #endif
-	return ptlock_init(ptdesc_page(ptdesc));
+	return ptlock_init(ptdesc);
 }

 static inline void pmd_ptlock_free(struct page *page)
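
Taken together with patches 05 and 06, the split-PTE-lock constructor now
runs through the ptdesc view end to end: pgtable_pte_page_ctor() ->
ptlock_init(page_ptdesc(page)) -> ptlock_alloc(ptdesc) ->
spin_lock_init(ptlock_ptr(ptdesc)). A hypothetical caller (not from this
series; alloc_page() and GFP_PGTABLE_USER from the generic pgalloc code are
assumed) would look like:

static struct page *example_alloc_pte_page(void)
{
	struct page *page = alloc_page(GFP_PGTABLE_USER);

	if (!page)
		return NULL;
	/* Registers the page as a page table and, with split PTE locks,
	 * allocates and initializes its ptdesc->ptl. */
	if (!pgtable_pte_page_ctor(page)) {
		__free_page(page);
		return NULL;
	}
	return page;
}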
From patchwork Mon Aug 7 23:04:51 2023
From: "Vishal Moola (Oracle)"
To: Andrew Morton, Matthew Wilcox
Subject: [PATCH mm-unstable v9 09/31] mm: Convert pmd_ptlock_free() to use ptdescs
Date: Mon, 7 Aug 2023 16:04:51 -0700
Message-Id: <20230807230513.102486-10-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards
splitting out struct ptdesc from struct page.

Acked-by: Mike Rapoport (IBM)
Signed-off-by: Vishal Moola (Oracle)
---
 include/linux/mm.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 13947b17f25e..aa6f77c71453 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3001,12 +3001,12 @@ static inline bool pmd_ptlock_init(struct ptdesc *ptdesc)
 	return ptlock_init(ptdesc);
 }

-static inline void pmd_ptlock_free(struct page *page)
+static inline void pmd_ptlock_free(struct ptdesc *ptdesc)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
+	VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc));
 #endif
-	ptlock_free(page);
+	ptlock_free(ptdesc_page(ptdesc));
 }

 #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte)
@@ -3019,7 +3019,7 @@ static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
 }

 static inline bool pmd_ptlock_init(struct ptdesc *ptdesc) { return true; }
-static inline void pmd_ptlock_free(struct page *page) {}
+static inline void pmd_ptlock_free(struct ptdesc *ptdesc) {}

 #define pmd_huge_pte(mm, pmd) ((mm)->pmd_huge_pte)
@@ -3043,7 +3043,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)

 static inline void pgtable_pmd_page_dtor(struct page *page)
 {
-	pmd_ptlock_free(page);
+	pmd_ptlock_free(page_ptdesc(page));
 	__ClearPageTable(page);
 	dec_lruvec_page_state(page, NR_PAGETABLE);
 }
From: "Vishal Moola (Oracle)"
To: Andrew Morton , Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Mike Rapoport
Subject: [PATCH mm-unstable v9 10/31] mm: Convert ptlock_free() to use ptdescs
Date: Mon, 7 Aug 2023 16:04:52 -0700
Message-Id: <20230807230513.102486-11-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>
References: <20230807230513.102486-1-vishal.moola@gmail.com>

This removes some direct accesses to struct page, working towards splitting out struct ptdesc from struct page.
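As an illustration (not taken from the patch), the split PTE lock lifecycle now runs entirely in terms of ptdescs. The sketch below assumes a configuration where split PTE ptlocks are in use and calls only helpers this series has already converted; the function name is hypothetical.

/* Sketch only: allocate, take and release a split PTE lock via a ptdesc. */
static bool example_ptlock_cycle(struct page *pte_page)
{
        struct ptdesc *ptdesc = page_ptdesc(pte_page);

        if (!ptlock_init(ptdesc))       /* sets up ptdesc->ptl (kmem alloc if ALLOC_SPLIT_PTLOCKS) */
                return false;

        spin_lock(ptlock_ptr(ptdesc));
        /* ... operate on the PTE table this ptdesc describes ... */
        spin_unlock(ptlock_ptr(ptdesc));

        ptlock_free(ptdesc);            /* after this patch, takes the owning ptdesc too */
        return true;
}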
Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm.h | 10 +++++----- mm/memory.c | 4 ++-- 2 files changed, 7 insertions(+), 7 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index aa6f77c71453..94984d49ab01 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2861,7 +2861,7 @@ static inline void pagetable_free(struct ptdesc *pt) #if ALLOC_SPLIT_PTLOCKS void __init ptlock_cache_init(void); bool ptlock_alloc(struct ptdesc *ptdesc); -extern void ptlock_free(struct page *page); +void ptlock_free(struct ptdesc *ptdesc); static inline spinlock_t *ptlock_ptr(struct ptdesc *ptdesc) { @@ -2877,7 +2877,7 @@ static inline bool ptlock_alloc(struct ptdesc *ptdesc) return true; } -static inline void ptlock_free(struct page *page) +static inline void ptlock_free(struct ptdesc *ptdesc) { } @@ -2918,7 +2918,7 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd) } static inline void ptlock_cache_init(void) {} static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } -static inline void ptlock_free(struct page *page) {} +static inline void ptlock_free(struct ptdesc *ptdesc) {} #endif /* USE_SPLIT_PTE_PTLOCKS */ static inline bool pgtable_pte_page_ctor(struct page *page) @@ -2932,7 +2932,7 @@ static inline bool pgtable_pte_page_ctor(struct page *page) static inline void pgtable_pte_page_dtor(struct page *page) { - ptlock_free(page); + ptlock_free(page_ptdesc(page)); __ClearPageTable(page); dec_lruvec_page_state(page, NR_PAGETABLE); } @@ -3006,7 +3006,7 @@ static inline void pmd_ptlock_free(struct ptdesc *ptdesc) #ifdef CONFIG_TRANSPARENT_HUGEPAGE VM_BUG_ON_PAGE(ptdesc->pmd_huge_pte, ptdesc_page(ptdesc)); #endif - ptlock_free(ptdesc_page(ptdesc)); + ptlock_free(ptdesc); } #define pmd_huge_pte(mm, pmd) (pmd_ptdesc(pmd)->pmd_huge_pte) diff --git a/mm/memory.c b/mm/memory.c index 3606ef72ba70..d003076b218d 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -6145,8 +6145,8 @@ bool ptlock_alloc(struct ptdesc *ptdesc) return true; } -void ptlock_free(struct page *page) +void ptlock_free(struct ptdesc *ptdesc) { - kmem_cache_free(page_ptl_cachep, page->ptl); + kmem_cache_free(page_ptl_cachep, ptdesc->ptl); } #endif From patchwork Mon Aug 7 23:04:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345375 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4EA5FC10F00 for ; Mon, 7 Aug 2023 23:07:15 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231362AbjHGXHN (ORCPT ); Mon, 7 Aug 2023 19:07:13 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45094 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231164AbjHGXG0 (ORCPT ); Mon, 7 Aug 2023 19:06:26 -0400 Received: from mail-yb1-xb33.google.com (mail-yb1-xb33.google.com [IPv6:2607:f8b0:4864:20::b33]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EBEF619BC; Mon, 7 Aug 2023 16:06:02 -0700 (PDT) Received: by mail-yb1-xb33.google.com with SMTP id 3f1490d57ef6-d0728058651so5368051276.1; Mon, 07 Aug 2023 16:06:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449540; x=1692054340; h=content-transfer-encoding:mime-version:references:in-reply-to 
From: "Vishal Moola (Oracle)"
To: Andrew Morton , Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Mike Rapoport
Subject: [PATCH mm-unstable v9 11/31] mm: Create ptdesc equivalents for pgtable_{pte,pmd}_page_{ctor,dtor}
Date: Mon, 7 Aug 2023 16:04:53 -0700
Message-Id: <20230807230513.102486-12-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>
References: <20230807230513.102486-1-vishal.moola@gmail.com>

Create pagetable_pte_ctor(), pagetable_pmd_ctor(), pagetable_pte_dtor(), and pagetable_pmd_dtor() and make the original pgtable constructor/destructors wrappers.
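The intended pairing is the one the arch conversions later in this series apply: allocation sites combine pagetable_alloc() with pagetable_pte_ctor()/pagetable_pmd_ctor(), and freeing sites combine pagetable_pte_dtor()/pagetable_pmd_dtor() with pagetable_free(). A hedched, illustrative sketch of the PTE case (the function names here are hypothetical):

static pte_t *example_pte_table_alloc(gfp_t gfp)
{
        struct ptdesc *ptdesc = pagetable_alloc(gfp, 0);

        if (!ptdesc)
                return NULL;
        /* init the ptlock, mark the folio as a page table, bump NR_PAGETABLE */
        if (!pagetable_pte_ctor(ptdesc)) {
                pagetable_free(ptdesc);
                return NULL;
        }
        return ptdesc_address(ptdesc);
}

static void example_pte_table_free(pte_t *pte)
{
        struct ptdesc *ptdesc = virt_to_ptdesc(pte);

        pagetable_pte_dtor(ptdesc);     /* clear the type, drop NR_PAGETABLE */
        pagetable_free(ptdesc);
}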
Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- include/linux/mm.h | 56 ++++++++++++++++++++++++++++++++++------------ 1 file changed, 42 insertions(+), 14 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 94984d49ab01..6310e0c59efe 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2921,20 +2921,34 @@ static inline bool ptlock_init(struct ptdesc *ptdesc) { return true; } static inline void ptlock_free(struct ptdesc *ptdesc) {} #endif /* USE_SPLIT_PTE_PTLOCKS */ -static inline bool pgtable_pte_page_ctor(struct page *page) +static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc) { - if (!ptlock_init(page_ptdesc(page))) + struct folio *folio = ptdesc_folio(ptdesc); + + if (!ptlock_init(ptdesc)) return false; - __SetPageTable(page); - inc_lruvec_page_state(page, NR_PAGETABLE); + __folio_set_pgtable(folio); + lruvec_stat_add_folio(folio, NR_PAGETABLE); return true; } +static inline bool pgtable_pte_page_ctor(struct page *page) +{ + return pagetable_pte_ctor(page_ptdesc(page)); +} + +static inline void pagetable_pte_dtor(struct ptdesc *ptdesc) +{ + struct folio *folio = ptdesc_folio(ptdesc); + + ptlock_free(ptdesc); + __folio_clear_pgtable(folio); + lruvec_stat_sub_folio(folio, NR_PAGETABLE); +} + static inline void pgtable_pte_page_dtor(struct page *page) { - ptlock_free(page_ptdesc(page)); - __ClearPageTable(page); - dec_lruvec_page_state(page, NR_PAGETABLE); + pagetable_pte_dtor(page_ptdesc(page)); } pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); @@ -3032,20 +3046,34 @@ static inline spinlock_t *pmd_lock(struct mm_struct *mm, pmd_t *pmd) return ptl; } -static inline bool pgtable_pmd_page_ctor(struct page *page) +static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc) { - if (!pmd_ptlock_init(page_ptdesc(page))) + struct folio *folio = ptdesc_folio(ptdesc); + + if (!pmd_ptlock_init(ptdesc)) return false; - __SetPageTable(page); - inc_lruvec_page_state(page, NR_PAGETABLE); + __folio_set_pgtable(folio); + lruvec_stat_add_folio(folio, NR_PAGETABLE); return true; } +static inline bool pgtable_pmd_page_ctor(struct page *page) +{ + return pagetable_pmd_ctor(page_ptdesc(page)); +} + +static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc) +{ + struct folio *folio = ptdesc_folio(ptdesc); + + pmd_ptlock_free(ptdesc); + __folio_clear_pgtable(folio); + lruvec_stat_sub_folio(folio, NR_PAGETABLE); +} + static inline void pgtable_pmd_page_dtor(struct page *page) { - pmd_ptlock_free(page_ptdesc(page)); - __ClearPageTable(page); - dec_lruvec_page_state(page, NR_PAGETABLE); + pagetable_pmd_dtor(page_ptdesc(page)); } /* From patchwork Mon Aug 7 23:04:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345376 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 08DBBC25B5E for ; Mon, 7 Aug 2023 23:07:20 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229895AbjHGXHS (ORCPT ); Mon, 7 Aug 2023 19:07:18 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45540 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229469AbjHGXGf (ORCPT ); Mon, 7 Aug 2023 19:06:35 -0400 Received: from mail-yw1-x112f.google.com (mail-yw1-x112f.google.com 
From: "Vishal Moola (Oracle)"
To: Andrew Morton , Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Christophe Leroy , Mike Rapoport
Subject: [PATCH mm-unstable v9 12/31] powerpc: Convert various functions to use ptdescs
Date: Mon, 7 Aug 2023 16:04:54 -0700
Message-Id: <20230807230513.102486-13-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>
References: <20230807230513.102486-1-vishal.moola@gmail.com>

In order to split struct ptdesc from struct page, convert various functions to use ptdescs.
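The conversion is largely mechanical: virt_to_page()/alloc_page()/__free_page()/page_address() become virt_to_ptdesc()/pagetable_alloc()/pagetable_free()/ptdesc_address(), and pt_frag_refcount, pt_list and pt_rcu_head are reached through the ptdesc. A condensed sketch of the fragment-release pattern applied in pgtable-frag.c, leaving out the reserved-page and RCU cases the real code handles (the function name is hypothetical):

static void example_put_pte_frag(unsigned long *frag)
{
        struct ptdesc *ptdesc = virt_to_ptdesc(frag);

        BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0);
        if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) {
                /* last fragment gone: tear down the page table page */
                pagetable_pte_dtor(ptdesc);
                pagetable_free(ptdesc);
        }
}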
Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/powerpc/mm/book3s64/mmu_context.c | 10 ++--- arch/powerpc/mm/book3s64/pgtable.c | 32 +++++++------- arch/powerpc/mm/pgtable-frag.c | 58 +++++++++++++------------- 3 files changed, 50 insertions(+), 50 deletions(-) diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c index c766e4c26e42..1715b07c630c 100644 --- a/arch/powerpc/mm/book3s64/mmu_context.c +++ b/arch/powerpc/mm/book3s64/mmu_context.c @@ -246,15 +246,15 @@ static void destroy_contexts(mm_context_t *ctx) static void pmd_frag_destroy(void *pmd_frag) { int count; - struct page *page; + struct ptdesc *ptdesc; - page = virt_to_page(pmd_frag); + ptdesc = virt_to_ptdesc(pmd_frag); /* drop all the pending references */ count = ((unsigned long)pmd_frag & ~PAGE_MASK) >> PMD_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ - if (atomic_sub_and_test(PMD_FRAG_NR - count, &page->pt_frag_refcount)) { - pgtable_pmd_page_dtor(page); - __free_page(page); + if (atomic_sub_and_test(PMD_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { + pagetable_pmd_dtor(ptdesc); + pagetable_free(ptdesc); } } diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c index 75b938268b04..1498ccd08367 100644 --- a/arch/powerpc/mm/book3s64/pgtable.c +++ b/arch/powerpc/mm/book3s64/pgtable.c @@ -384,22 +384,22 @@ static pmd_t *get_pmd_from_cache(struct mm_struct *mm) static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm) { void *ret = NULL; - struct page *page; + struct ptdesc *ptdesc; gfp_t gfp = GFP_KERNEL_ACCOUNT | __GFP_ZERO; if (mm == &init_mm) gfp &= ~__GFP_ACCOUNT; - page = alloc_page(gfp); - if (!page) + ptdesc = pagetable_alloc(gfp, 0); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(page)) { - __free_pages(page, 0); + if (!pagetable_pmd_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - atomic_set(&page->pt_frag_refcount, 1); + atomic_set(&ptdesc->pt_frag_refcount, 1); - ret = page_address(page); + ret = ptdesc_address(ptdesc); /* * if we support only one fragment just return the * allocated page. @@ -409,12 +409,12 @@ static pmd_t *__alloc_for_pmdcache(struct mm_struct *mm) spin_lock(&mm->page_table_lock); /* - * If we find pgtable_page set, we return + * If we find ptdesc_page set, we return * the allocated page with single fragment * count. 
*/ if (likely(!mm->context.pmd_frag)) { - atomic_set(&page->pt_frag_refcount, PMD_FRAG_NR); + atomic_set(&ptdesc->pt_frag_refcount, PMD_FRAG_NR); mm->context.pmd_frag = ret + PMD_FRAG_SIZE; } spin_unlock(&mm->page_table_lock); @@ -435,15 +435,15 @@ pmd_t *pmd_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr) void pmd_fragment_free(unsigned long *pmd) { - struct page *page = virt_to_page(pmd); + struct ptdesc *ptdesc = virt_to_ptdesc(pmd); - if (PageReserved(page)) - return free_reserved_page(page); + if (pagetable_is_reserved(ptdesc)) + return free_reserved_ptdesc(ptdesc); - BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0); - if (atomic_dec_and_test(&page->pt_frag_refcount)) { - pgtable_pmd_page_dtor(page); - __free_page(page); + BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0); + if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) { + pagetable_pmd_dtor(ptdesc); + pagetable_free(ptdesc); } } diff --git a/arch/powerpc/mm/pgtable-frag.c b/arch/powerpc/mm/pgtable-frag.c index 0c6b68130025..8c31802f97e8 100644 --- a/arch/powerpc/mm/pgtable-frag.c +++ b/arch/powerpc/mm/pgtable-frag.c @@ -18,15 +18,15 @@ void pte_frag_destroy(void *pte_frag) { int count; - struct page *page; + struct ptdesc *ptdesc; - page = virt_to_page(pte_frag); + ptdesc = virt_to_ptdesc(pte_frag); /* drop all the pending references */ count = ((unsigned long)pte_frag & ~PAGE_MASK) >> PTE_FRAG_SIZE_SHIFT; /* We allow PTE_FRAG_NR fragments from a PTE page */ - if (atomic_sub_and_test(PTE_FRAG_NR - count, &page->pt_frag_refcount)) { - pgtable_pte_page_dtor(page); - __free_page(page); + if (atomic_sub_and_test(PTE_FRAG_NR - count, &ptdesc->pt_frag_refcount)) { + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } } @@ -55,25 +55,25 @@ static pte_t *get_pte_from_cache(struct mm_struct *mm) static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel) { void *ret = NULL; - struct page *page; + struct ptdesc *ptdesc; if (!kernel) { - page = alloc_page(PGALLOC_GFP | __GFP_ACCOUNT); - if (!page) + ptdesc = pagetable_alloc(PGALLOC_GFP | __GFP_ACCOUNT, 0); + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if (!pagetable_pte_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } } else { - page = alloc_page(PGALLOC_GFP); - if (!page) + ptdesc = pagetable_alloc(PGALLOC_GFP, 0); + if (!ptdesc) return NULL; } - atomic_set(&page->pt_frag_refcount, 1); + atomic_set(&ptdesc->pt_frag_refcount, 1); - ret = page_address(page); + ret = ptdesc_address(ptdesc); /* * if we support only one fragment just return the * allocated page. @@ -82,12 +82,12 @@ static pte_t *__alloc_for_ptecache(struct mm_struct *mm, int kernel) return ret; spin_lock(&mm->page_table_lock); /* - * If we find pgtable_page set, we return + * If we find ptdesc_page set, we return * the allocated page with single fragment * count. 
*/ if (likely(!pte_frag_get(&mm->context))) { - atomic_set(&page->pt_frag_refcount, PTE_FRAG_NR); + atomic_set(&ptdesc->pt_frag_refcount, PTE_FRAG_NR); pte_frag_set(&mm->context, ret + PTE_FRAG_SIZE); } spin_unlock(&mm->page_table_lock); @@ -108,28 +108,28 @@ pte_t *pte_fragment_alloc(struct mm_struct *mm, int kernel) static void pte_free_now(struct rcu_head *head) { - struct page *page; + struct ptdesc *ptdesc; - page = container_of(head, struct page, rcu_head); - pgtable_pte_page_dtor(page); - __free_page(page); + ptdesc = container_of(head, struct ptdesc, pt_rcu_head); + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } void pte_fragment_free(unsigned long *table, int kernel) { - struct page *page = virt_to_page(table); + struct ptdesc *ptdesc = virt_to_ptdesc(table); - if (PageReserved(page)) - return free_reserved_page(page); + if (pagetable_is_reserved(ptdesc)) + return free_reserved_ptdesc(ptdesc); - BUG_ON(atomic_read(&page->pt_frag_refcount) <= 0); - if (atomic_dec_and_test(&page->pt_frag_refcount)) { + BUG_ON(atomic_read(&ptdesc->pt_frag_refcount) <= 0); + if (atomic_dec_and_test(&ptdesc->pt_frag_refcount)) { if (kernel) - __free_page(page); - else if (TestClearPageActive(page)) - call_rcu(&page->rcu_head, pte_free_now); + pagetable_free(ptdesc); + else if (folio_test_clear_active(ptdesc_folio(ptdesc))) + call_rcu(&ptdesc->pt_rcu_head, pte_free_now); else - pte_free_now(&page->rcu_head); + pte_free_now(&ptdesc->pt_rcu_head); } } From patchwork Mon Aug 7 23:04:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345377 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 667BEC04E69 for ; Mon, 7 Aug 2023 23:07:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229580AbjHGXH1 (ORCPT ); Mon, 7 Aug 2023 19:07:27 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229810AbjHGXGg (ORCPT ); Mon, 7 Aug 2023 19:06:36 -0400 Received: from mail-oo1-xc29.google.com (mail-oo1-xc29.google.com [IPv6:2607:f8b0:4864:20::c29]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E2A9C10F5; Mon, 7 Aug 2023 16:06:07 -0700 (PDT) Received: by mail-oo1-xc29.google.com with SMTP id 006d021491bc7-56cd753b31cso2655476eaf.1; Mon, 07 Aug 2023 16:06:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449544; x=1692054344; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=sltDOqqNfHEtvz8jLXUjW/r74sJCuRKUMVTEbSHBWCo=; b=UhBUMZPilOAQo3tCXIAniV7oZUOB/4OUCgf91AGw0VUxPIy18IwE3Hi+xzTQPR3j5J o20MFNZLnYngdlpCV7c18tsU830Io9A8DWdX3VSs8IZGcaz3l+m4OaxP2VVwR45ldftn z2kW31b5xPxYeP3g5UA46i7qsam8HRjTgG7guCbc8HyGEcYktnlbPJA40mgTTCO6Cfp5 i9UcQWRlBkevWdSR2pIJgYC6roUzquc+QXKeembvcRSerjkiIbuoDs27MkJKPDd/Xbpo rTtWQ/L28eqbGOEHbuk8Zg9jgIBeYXnwkb82KyyF5XFV7KZrtQoqdZyN2qkwYGLx08Fy p3Sg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449544; x=1692054344; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc 
:subject:date:message-id:reply-to; bh=sltDOqqNfHEtvz8jLXUjW/r74sJCuRKUMVTEbSHBWCo=; b=YtmPEwbCxXp1FNnQaHFrhhrSa0rFjJkyfor0V1uDNYaj2e3Fy+CEwDYoVj5sQjylIp VNdTd0ZbcZfEHJjjCgBKSlj0v+Ug3LJy6au+Cu3BfqmcyfIS5LJ87jSPNOKKiq86WHz2 UJJ/1QHgF0qYrnZqsuxRclY5ny5e18K56WEn+BIh6xsr28X3fCX+vtp7mGaeeQqG/E9p 2Fr3iaAhhiX961k8QEq3zE4idvYsD6IZ37+oPGrSk2cbSQ4rl6A2S71uPqytxD3HdauY qtdDwb8d16g8cS4UJdpj6b10UK4rtIB6I6bUIygfB4jut9dLuSxj7zrciBDBxoPo4Q+S Nz8Q== X-Gm-Message-State: AOJu0Ywhk1SbAFtIQVSu8ghWmzCH6ZBcr4zBvgnJSJtnGZ2fGicBYH3B nbl3ynZ/LWOv9DFLjZc0oIQ= X-Google-Smtp-Source: AGHT+IHsTvUJyWq2o9mZJc1NQ3FrwP0s/yEHbaA4sDhb4UDEkTpN/SnOXwKPkJpn25/vbSLatpkp3w== X-Received: by 2002:a05:6808:df3:b0:3a4:8140:97e8 with SMTP id g51-20020a0568080df300b003a4814097e8mr8989196oic.14.1691449544486; Mon, 07 Aug 2023 16:05:44 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.05.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:05:44 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Dave Hansen Subject: [PATCH mm-unstable v9 13/31] x86: Convert various functions to use ptdescs Date: Mon, 7 Aug 2023 16:04:55 -0700 Message-Id: <20230807230513.102486-14-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org In order to split struct ptdesc from struct page, convert various functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further. Signed-off-by: Vishal Moola (Oracle) --- arch/x86/mm/pgtable.c | 47 ++++++++++++++++++++++++++----------------- 1 file changed, 28 insertions(+), 19 deletions(-) diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index 15a8009a4480..d3a93e8766ee 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -52,7 +52,7 @@ early_param("userpte", setup_userpte); void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) { - pgtable_pte_page_dtor(pte); + pagetable_pte_dtor(page_ptdesc(pte)); paravirt_release_pte(page_to_pfn(pte)); paravirt_tlb_remove_table(tlb, pte); } @@ -60,7 +60,7 @@ void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) #if CONFIG_PGTABLE_LEVELS > 2 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) { - struct page *page = virt_to_page(pmd); + struct ptdesc *ptdesc = virt_to_ptdesc(pmd); paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT); /* * NOTE! 
For PAE, any changes to the top page-directory-pointer-table @@ -69,8 +69,8 @@ void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) #ifdef CONFIG_X86_PAE tlb->need_flush_all = 1; #endif - pgtable_pmd_page_dtor(page); - paravirt_tlb_remove_table(tlb, page); + pagetable_pmd_dtor(ptdesc); + paravirt_tlb_remove_table(tlb, ptdesc_page(ptdesc)); } #if CONFIG_PGTABLE_LEVELS > 3 @@ -92,16 +92,16 @@ void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d) static inline void pgd_list_add(pgd_t *pgd) { - struct page *page = virt_to_page(pgd); + struct ptdesc *ptdesc = virt_to_ptdesc(pgd); - list_add(&page->lru, &pgd_list); + list_add(&ptdesc->pt_list, &pgd_list); } static inline void pgd_list_del(pgd_t *pgd) { - struct page *page = virt_to_page(pgd); + struct ptdesc *ptdesc = virt_to_ptdesc(pgd); - list_del(&page->lru); + list_del(&ptdesc->pt_list); } #define UNSHARED_PTRS_PER_PGD \ @@ -112,12 +112,12 @@ static inline void pgd_list_del(pgd_t *pgd) static void pgd_set_mm(pgd_t *pgd, struct mm_struct *mm) { - virt_to_page(pgd)->pt_mm = mm; + virt_to_ptdesc(pgd)->pt_mm = mm; } struct mm_struct *pgd_page_get_mm(struct page *page) { - return page->pt_mm; + return page_ptdesc(page)->pt_mm; } static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd) @@ -213,11 +213,14 @@ void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd) static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) { int i; + struct ptdesc *ptdesc; for (i = 0; i < count; i++) if (pmds[i]) { - pgtable_pmd_page_dtor(virt_to_page(pmds[i])); - free_page((unsigned long)pmds[i]); + ptdesc = virt_to_ptdesc(pmds[i]); + + pagetable_pmd_dtor(ptdesc); + pagetable_free(ptdesc); mm_dec_nr_pmds(mm); } } @@ -230,18 +233,24 @@ static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) if (mm == &init_mm) gfp &= ~__GFP_ACCOUNT; + gfp &= ~__GFP_HIGHMEM; for (i = 0; i < count; i++) { - pmd_t *pmd = (pmd_t *)__get_free_page(gfp); - if (!pmd) + pmd_t *pmd = NULL; + struct ptdesc *ptdesc = pagetable_alloc(gfp, 0); + + if (!ptdesc) failed = true; - if (pmd && !pgtable_pmd_page_ctor(virt_to_page(pmd))) { - free_page((unsigned long)pmd); - pmd = NULL; + if (ptdesc && !pagetable_pmd_ctor(ptdesc)) { + pagetable_free(ptdesc); + ptdesc = NULL; failed = true; } - if (pmd) + if (ptdesc) { mm_inc_nr_pmds(mm); + pmd = ptdesc_address(ptdesc); + } + pmds[i] = pmd; } @@ -830,7 +839,7 @@ int pud_free_pmd_page(pud_t *pud, unsigned long addr) free_page((unsigned long)pmd_sv); - pgtable_pmd_page_dtor(virt_to_page(pmd)); + pagetable_pmd_dtor(virt_to_ptdesc(pmd)); free_page((unsigned long)pmd); return 1; From patchwork Mon Aug 7 23:04:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345378 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D07E2C25B5C for ; Mon, 7 Aug 2023 23:07:39 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231164AbjHGXHi (ORCPT ); Mon, 7 Aug 2023 19:07:38 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45326 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230410AbjHGXG5 (ORCPT ); Mon, 7 Aug 2023 19:06:57 -0400 Received: from mail-yw1-x1133.google.com (mail-yw1-x1133.google.com [IPv6:2607:f8b0:4864:20::1133]) by 
lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9E8FC172C; Mon, 7 Aug 2023 16:06:09 -0700 (PDT) Received: by mail-yw1-x1133.google.com with SMTP id 00721157ae682-5861116fd74so47611597b3.0; Mon, 07 Aug 2023 16:06:09 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449546; x=1692054346; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=KTxrIV2g+RQp8x48kTjbks2V3Qkjrf9giqtKYKQ6wMQ=; b=bESfOv3fJjl6S2bDdKG26ChK560t/KurCf2BuIVrJugWlIetZAY8dkm3BlRqTb5F9h MmCd+fJD3FYKnm/uPsZqzHHfYqRAfZJ6edPW59b/mfQy9OnVPplxToH74IAxvuQBpZIp niQgR35WlLiV2T1WK1prVa+1DAnr9uCehUPSrpK2bRog9p7trBe4LuQ5wQtTiaXMkeR6 5WiqV9YMt9t6YbF2oMIpXApkWHFh9QOYyGn6c4/FtI6WKINzWjkgGMiBWrQM5L6cS45e +OyFkksfoYTyAHJrwLrJApoTt8RQXAw2BsSDClQbHPobWbdTJ6uW2JVwGx2BjRuJHUGC 6BBQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449546; x=1692054346; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=KTxrIV2g+RQp8x48kTjbks2V3Qkjrf9giqtKYKQ6wMQ=; b=FuvXWxTrkNtI4Li0KgsJuT092l1JSBwMuqck9DkLAHH1lIiSpRFBjzeffbbtdlDikU Y87PlNVO0n/WhpO5IpY/e5H6GlVvr7ICBIhIrUTWGYQLK0a93aKoQk1zVHq2IM2PeW/o 4nK+ChkRTSksg6p/JO8hC1dBhS/Ug+SslF8I6M2+Az6KmN9GqYc8Lp5XalX/iFFpVIR7 6pm04GUoqtQgWwtqOB4zTi9ynbNXOcgpDeTjlmQhytCY1cgKZB8ZSXZ1NzJOXEUQLjyG C7sVHSNqNbH0yWVuXOgD3h35yaoOMsp6FCjcevOlcpNCsCqJqH8abtWUGZRbZMQnzfOf fa3A== X-Gm-Message-State: AOJu0YwPwkIosWmujdYlRs80PcTz9WTMjbTwGcRFmt/k4WRFBL14c/1g RBOboM5AYKBcUjRVYfv3pYLgU90uemluSg== X-Google-Smtp-Source: AGHT+IEbeU2d+G/u3Omk8ALzIFtGElW92WFOhX86na7K18OIxELxvZjq1aqGQvka/NS+TcgqvjsoPw== X-Received: by 2002:a25:d08:0:b0:d1c:7549:4e8b with SMTP id 8-20020a250d08000000b00d1c75494e8bmr8202835ybn.29.1691449546593; Mon, 07 Aug 2023 16:05:46 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.05.44 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:05:46 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , David Hildenbrand , Claudio Imbrenda , Mike Rapoport Subject: [PATCH mm-unstable v9 14/31] s390: Convert various pgalloc functions to use ptdescs Date: Mon, 7 Aug 2023 16:04:56 -0700 Message-Id: <20230807230513.102486-15-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. 
Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further. Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/s390/include/asm/pgalloc.h | 4 +- arch/s390/include/asm/tlb.h | 4 +- arch/s390/mm/pgalloc.c | 128 ++++++++++++++++---------------- 3 files changed, 69 insertions(+), 67 deletions(-) diff --git a/arch/s390/include/asm/pgalloc.h b/arch/s390/include/asm/pgalloc.h index 89a9d5ef94f8..376b4b23bdaa 100644 --- a/arch/s390/include/asm/pgalloc.h +++ b/arch/s390/include/asm/pgalloc.h @@ -86,7 +86,7 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long vmaddr) if (!table) return NULL; crst_table_init(table, _SEGMENT_ENTRY_EMPTY); - if (!pgtable_pmd_page_ctor(virt_to_page(table))) { + if (!pagetable_pmd_ctor(virt_to_ptdesc(table))) { crst_table_free(mm, table); return NULL; } @@ -97,7 +97,7 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) { if (mm_pmd_folded(mm)) return; - pgtable_pmd_page_dtor(virt_to_page(pmd)); + pagetable_pmd_dtor(virt_to_ptdesc(pmd)); crst_table_free(mm, (unsigned long *) pmd); } diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h index b91f4a9b044c..383b1f91442c 100644 --- a/arch/s390/include/asm/tlb.h +++ b/arch/s390/include/asm/tlb.h @@ -89,12 +89,12 @@ static inline void pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd, { if (mm_pmd_folded(tlb->mm)) return; - pgtable_pmd_page_dtor(virt_to_page(pmd)); + pagetable_pmd_dtor(virt_to_ptdesc(pmd)); __tlb_adjust_range(tlb, address, PAGE_SIZE); tlb->mm->context.flush_mm = 1; tlb->freed_tables = 1; tlb->cleared_puds = 1; - tlb_remove_table(tlb, pmd); + tlb_remove_ptdesc(tlb, pmd); } /* diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c index d7374add7820..07fc660a24aa 100644 --- a/arch/s390/mm/pgalloc.c +++ b/arch/s390/mm/pgalloc.c @@ -43,17 +43,17 @@ __initcall(page_table_register_sysctl); unsigned long *crst_table_alloc(struct mm_struct *mm) { - struct page *page = alloc_pages(GFP_KERNEL, CRST_ALLOC_ORDER); + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL, CRST_ALLOC_ORDER); - if (!page) + if (!ptdesc) return NULL; - arch_set_page_dat(page, CRST_ALLOC_ORDER); - return (unsigned long *) page_to_virt(page); + arch_set_page_dat(ptdesc_page(ptdesc), CRST_ALLOC_ORDER); + return (unsigned long *) ptdesc_to_virt(ptdesc); } void crst_table_free(struct mm_struct *mm, unsigned long *table) { - free_pages((unsigned long)table, CRST_ALLOC_ORDER); + pagetable_free(virt_to_ptdesc(table)); } static void __crst_table_upgrade(void *arg) @@ -140,21 +140,21 @@ static inline unsigned int atomic_xor_bits(atomic_t *v, unsigned int bits) struct page *page_table_alloc_pgste(struct mm_struct *mm) { - struct page *page; + struct ptdesc *ptdesc; u64 *table; - page = alloc_page(GFP_KERNEL); - if (page) { - table = (u64 *)page_to_virt(page); + ptdesc = pagetable_alloc(GFP_KERNEL, 0); + if (ptdesc) { + table = (u64 *)ptdesc_to_virt(ptdesc); memset64(table, _PAGE_INVALID, PTRS_PER_PTE); memset64(table + PTRS_PER_PTE, 0, PTRS_PER_PTE); } - return page; + return ptdesc_page(ptdesc); } void page_table_free_pgste(struct page *page) { - __free_page(page); + pagetable_free(page_ptdesc(page)); } #endif /* CONFIG_PGSTE */ @@ -242,7 +242,7 @@ void page_table_free_pgste(struct page *page) unsigned long *page_table_alloc(struct mm_struct *mm) { unsigned long *table; - struct page *page; + struct ptdesc *ptdesc; unsigned int mask, bit; /* Try to get a fragment of a 4K page as a 2K page table */ @@ -250,9 +250,9 @@ unsigned 
long *page_table_alloc(struct mm_struct *mm) table = NULL; spin_lock_bh(&mm->context.lock); if (!list_empty(&mm->context.pgtable_list)) { - page = list_first_entry(&mm->context.pgtable_list, - struct page, lru); - mask = atomic_read(&page->_refcount) >> 24; + ptdesc = list_first_entry(&mm->context.pgtable_list, + struct ptdesc, pt_list); + mask = atomic_read(&ptdesc->_refcount) >> 24; /* * The pending removal bits must also be checked. * Failure to do so might lead to an impossible @@ -264,13 +264,13 @@ unsigned long *page_table_alloc(struct mm_struct *mm) */ mask = (mask | (mask >> 4)) & 0x03U; if (mask != 0x03U) { - table = (unsigned long *) page_to_virt(page); + table = (unsigned long *) ptdesc_to_virt(ptdesc); bit = mask & 1; /* =1 -> second 2K */ if (bit) table += PTRS_PER_PTE; - atomic_xor_bits(&page->_refcount, + atomic_xor_bits(&ptdesc->_refcount, 0x01U << (bit + 24)); - list_del_init(&page->lru); + list_del_init(&ptdesc->pt_list); } } spin_unlock_bh(&mm->context.lock); @@ -278,28 +278,28 @@ unsigned long *page_table_alloc(struct mm_struct *mm) return table; } /* Allocate a fresh page */ - page = alloc_page(GFP_KERNEL); - if (!page) + ptdesc = pagetable_alloc(GFP_KERNEL, 0); + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if (!pagetable_pte_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - arch_set_page_dat(page, 0); + arch_set_page_dat(ptdesc_page(ptdesc), 0); /* Initialize page table */ - table = (unsigned long *) page_to_virt(page); + table = (unsigned long *) ptdesc_to_virt(ptdesc); if (mm_alloc_pgste(mm)) { /* Return 4K page table with PGSTEs */ - INIT_LIST_HEAD(&page->lru); - atomic_xor_bits(&page->_refcount, 0x03U << 24); + INIT_LIST_HEAD(&ptdesc->pt_list); + atomic_xor_bits(&ptdesc->_refcount, 0x03U << 24); memset64((u64 *)table, _PAGE_INVALID, PTRS_PER_PTE); memset64((u64 *)table + PTRS_PER_PTE, 0, PTRS_PER_PTE); } else { /* Return the first 2K fragment of the page */ - atomic_xor_bits(&page->_refcount, 0x01U << 24); + atomic_xor_bits(&ptdesc->_refcount, 0x01U << 24); memset64((u64 *)table, _PAGE_INVALID, 2 * PTRS_PER_PTE); spin_lock_bh(&mm->context.lock); - list_add(&page->lru, &mm->context.pgtable_list); + list_add(&ptdesc->pt_list, &mm->context.pgtable_list); spin_unlock_bh(&mm->context.lock); } return table; @@ -322,19 +322,18 @@ static void page_table_release_check(struct page *page, void *table, static void pte_free_now(struct rcu_head *head) { - struct page *page; + struct ptdesc *ptdesc; - page = container_of(head, struct page, rcu_head); - pgtable_pte_page_dtor(page); - __free_page(page); + ptdesc = container_of(head, struct ptdesc, pt_rcu_head); + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } void page_table_free(struct mm_struct *mm, unsigned long *table) { unsigned int mask, bit, half; - struct page *page; + struct ptdesc *ptdesc = virt_to_ptdesc(table); - page = virt_to_page(table); if (!mm_alloc_pgste(mm)) { /* Free 2K page table fragment of a 4K page */ bit = ((unsigned long) table & ~PAGE_MASK)/(PTRS_PER_PTE*sizeof(pte_t)); @@ -344,51 +343,50 @@ void page_table_free(struct mm_struct *mm, unsigned long *table) * will happen outside of the critical section from this * function or from __tlb_remove_table() */ - mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x11U << (bit + 24)); mask >>= 24; - if ((mask & 0x03U) && !PageActive(page)) { + if ((mask & 0x03U) && !folio_test_active(ptdesc_folio(ptdesc))) { /* * Other half is allocated, and 
neither half has had * its free deferred: add page to head of list, to make * this freed half available for immediate reuse. */ - list_add(&page->lru, &mm->context.pgtable_list); + list_add(&ptdesc->pt_list, &mm->context.pgtable_list); } else { /* If page is on list, now remove it. */ - list_del_init(&page->lru); + list_del_init(&ptdesc->pt_list); } spin_unlock_bh(&mm->context.lock); - mask = atomic_xor_bits(&page->_refcount, 0x10U << (bit + 24)); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x10U << (bit + 24)); mask >>= 24; if (mask != 0x00U) return; half = 0x01U << bit; } else { half = 0x03U; - mask = atomic_xor_bits(&page->_refcount, 0x03U << 24); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x03U << 24); mask >>= 24; } - page_table_release_check(page, table, half, mask); - if (TestClearPageActive(page)) - call_rcu(&page->rcu_head, pte_free_now); + page_table_release_check(ptdesc_page(ptdesc), table, half, mask); + if (folio_test_clear_active(ptdesc_folio(ptdesc))) + call_rcu(&ptdesc->pt_rcu_head, pte_free_now); else - pte_free_now(&page->rcu_head); + pte_free_now(&ptdesc->pt_rcu_head); } void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table, unsigned long vmaddr) { struct mm_struct *mm; - struct page *page; unsigned int bit, mask; + struct ptdesc *ptdesc = virt_to_ptdesc(table); mm = tlb->mm; - page = virt_to_page(table); if (mm_alloc_pgste(mm)) { gmap_unlink(mm, table, vmaddr); table = (unsigned long *) ((unsigned long)table | 0x03U); - tlb_remove_table(tlb, table); + tlb_remove_ptdesc(tlb, table); return; } bit = ((unsigned long) table & ~PAGE_MASK) / (PTRS_PER_PTE*sizeof(pte_t)); @@ -398,19 +396,19 @@ void page_table_free_rcu(struct mmu_gather *tlb, unsigned long *table, * outside of the critical section from __tlb_remove_table() or from * page_table_free() */ - mask = atomic_xor_bits(&page->_refcount, 0x11U << (bit + 24)); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x11U << (bit + 24)); mask >>= 24; - if ((mask & 0x03U) && !PageActive(page)) { + if ((mask & 0x03U) && !folio_test_active(ptdesc_folio(ptdesc))) { /* * Other half is allocated, and neither half has had * its free deferred: add page to end of list, to make * this freed half available for reuse once its pending * bit has been cleared by __tlb_remove_table(). */ - list_add_tail(&page->lru, &mm->context.pgtable_list); + list_add_tail(&ptdesc->pt_list, &mm->context.pgtable_list); } else { /* If page is on list, now remove it. 
*/ - list_del_init(&page->lru); + list_del_init(&ptdesc->pt_list); } spin_unlock_bh(&mm->context.lock); table = (unsigned long *) ((unsigned long) table | (0x01U << bit)); @@ -421,30 +419,30 @@ void __tlb_remove_table(void *_table) { unsigned int mask = (unsigned long) _table & 0x03U, half = mask; void *table = (void *)((unsigned long) _table ^ mask); - struct page *page = virt_to_page(table); + struct ptdesc *ptdesc = virt_to_ptdesc(table); switch (half) { case 0x00U: /* pmd, pud, or p4d */ - free_pages((unsigned long)table, CRST_ALLOC_ORDER); + pagetable_free(ptdesc); return; case 0x01U: /* lower 2K of a 4K page table */ case 0x02U: /* higher 2K of a 4K page table */ - mask = atomic_xor_bits(&page->_refcount, mask << (4 + 24)); + mask = atomic_xor_bits(&ptdesc->_refcount, mask << (4 + 24)); mask >>= 24; if (mask != 0x00U) return; break; case 0x03U: /* 4K page table with pgstes */ - mask = atomic_xor_bits(&page->_refcount, 0x03U << 24); + mask = atomic_xor_bits(&ptdesc->_refcount, 0x03U << 24); mask >>= 24; break; } - page_table_release_check(page, table, half, mask); - if (TestClearPageActive(page)) - call_rcu(&page->rcu_head, pte_free_now); + page_table_release_check(ptdesc_page(ptdesc), table, half, mask); + if (folio_test_clear_active(ptdesc_folio(ptdesc))) + call_rcu(&ptdesc->pt_rcu_head, pte_free_now); else - pte_free_now(&page->rcu_head); + pte_free_now(&ptdesc->pt_rcu_head); } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -488,16 +486,20 @@ static void base_pgt_free(unsigned long *table) static unsigned long *base_crst_alloc(unsigned long val) { unsigned long *table; + struct ptdesc *ptdesc; - table = (unsigned long *)__get_free_pages(GFP_KERNEL, CRST_ALLOC_ORDER); - if (table) - crst_table_init(table, val); + ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, CRST_ALLOC_ORDER); + if (!ptdesc) + return NULL; + table = ptdesc_address(ptdesc); + + crst_table_init(table, val); return table; } static void base_crst_free(unsigned long *table) { - free_pages((unsigned long)table, CRST_ALLOC_ORDER); + pagetable_free(virt_to_ptdesc(table)); } #define BASE_ADDR_END_FUNC(NAME, SIZE) \ From patchwork Mon Aug 7 23:04:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345380 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 499D0C25B79 for ; Mon, 7 Aug 2023 23:07:42 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230041AbjHGXHk (ORCPT ); Mon, 7 Aug 2023 19:07:40 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45076 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229717AbjHGXG5 (ORCPT ); Mon, 7 Aug 2023 19:06:57 -0400 Received: from mail-yb1-xb2a.google.com (mail-yb1-xb2a.google.com [IPv6:2607:f8b0:4864:20::b2a]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id E785F1705; Mon, 7 Aug 2023 16:06:10 -0700 (PDT) Received: by mail-yb1-xb2a.google.com with SMTP id 3f1490d57ef6-d479d128596so3766300276.1; Mon, 07 Aug 2023 16:06:10 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449548; x=1692054348; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; 
From: "Vishal Moola (Oracle)"
To: Andrew Morton , Matthew Wilcox
Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Mike Rapoport
Subject: [PATCH mm-unstable v9 15/31] mm: remove page table members from struct page
Date: Mon, 7 Aug 2023 16:04:57 -0700
Message-Id: <20230807230513.102486-16-vishal.moola@gmail.com>
In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com>
References: <20230807230513.102486-1-vishal.moola@gmail.com>

The page table members are now split out into their own ptdesc struct. Remove them from struct page.
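For reference, an abridged sketch of where those members now live. Field names follow the TABLE_MATCH() asserts in the diff below, but this is only an approximation of the struct ptdesc introduced earlier in the series, not its verbatim definition (padding, the memcg field and one union are omitted).

/* Approximation only; see the real struct ptdesc in mm_types.h. */
struct ptdesc_sketch {
        unsigned long __page_flags;             /* overlays page->flags */
        union {
                struct rcu_head pt_rcu_head;    /* overlays page->rcu_head */
                struct list_head pt_list;       /* overlays compound_head/lru */
                struct {
                        unsigned long _pt_pad_1;
                        pgtable_t pmd_huge_pte; /* protected by the ptl */
                };
        };
        unsigned long __page_mapping;           /* overlays page->mapping */
        union {
                struct mm_struct *pt_mm;        /* x86 pgds only */
                atomic_t pt_frag_refcount;      /* powerpc */
        };
#if ALLOC_SPLIT_PTLOCKS
        spinlock_t *ptl;
#else
        spinlock_t ptl;
#endif
        unsigned int __page_type;               /* overlays page->page_type */
        atomic_t _refcount;                     /* overlays page->_refcount */
};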
Signed-off-by: Vishal Moola (Oracle) Acked-by: Mike Rapoport (IBM) --- include/linux/mm_types.h | 21 --------------------- 1 file changed, 21 deletions(-) diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index ea34b22b4cbf..f5ba5b0bc836 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -141,24 +141,6 @@ struct page { struct { /* Tail pages of compound page */ unsigned long compound_head; /* Bit zero is set */ }; - struct { /* Page table pages */ - unsigned long _pt_pad_1; /* compound_head */ - pgtable_t pmd_huge_pte; /* protected by page->ptl */ - /* - * A PTE page table page might be freed by use of - * rcu_head: which overlays those two fields above. - */ - unsigned long _pt_pad_2; /* mapping */ - union { - struct mm_struct *pt_mm; /* x86 pgds only */ - atomic_t pt_frag_refcount; /* powerpc */ - }; -#if ALLOC_SPLIT_PTLOCKS - spinlock_t *ptl; -#else - spinlock_t ptl; -#endif - }; struct { /* ZONE_DEVICE pages */ /** @pgmap: Points to the hosting device page map. */ struct dev_pagemap *pgmap; @@ -454,10 +436,7 @@ struct ptdesc { TABLE_MATCH(flags, __page_flags); TABLE_MATCH(compound_head, pt_list); TABLE_MATCH(compound_head, _pt_pad_1); -TABLE_MATCH(pmd_huge_pte, pmd_huge_pte); TABLE_MATCH(mapping, __page_mapping); -TABLE_MATCH(pt_mm, pt_mm); -TABLE_MATCH(ptl, ptl); TABLE_MATCH(rcu_head, pt_rcu_head); TABLE_MATCH(page_type, __page_type); TABLE_MATCH(_refcount, _refcount); From patchwork Mon Aug 7 23:04:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345381 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5BA66C05052 for ; Mon, 7 Aug 2023 23:07:43 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229717AbjHGXHl (ORCPT ); Mon, 7 Aug 2023 19:07:41 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45902 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230388AbjHGXG5 (ORCPT ); Mon, 7 Aug 2023 19:06:57 -0400 Received: from mail-yw1-x1136.google.com (mail-yw1-x1136.google.com [IPv6:2607:f8b0:4864:20::1136]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 24ED81BDF; Mon, 7 Aug 2023 16:06:11 -0700 (PDT) Received: by mail-yw1-x1136.google.com with SMTP id 00721157ae682-58969d4f1b6so7592957b3.2; Mon, 07 Aug 2023 16:06:11 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449550; x=1692054350; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=bBkB4loM/Mt006iaxsYk7j8UnaAx8a9/NTd5FkPootg=; b=VY/s2U3lT3eb7t5IpiSg8gZgrV6zdEznmL00awlF+lYtU+Ed+18ZVDdsEO6HtU7Vpy dstrUN4SchMkl2fom9aAptCCOd47DbaRbKQrrssm7FlhTEAcsGfKdLEpxi/zNmVX8fbV m0BaPQ+7+Y3ljNnqxJV89KTzVnBKrAEa7+VtN/uK6yP5RttPNi+e44fLcNeyDJwviR59 fKsVvf58pKEDMlQLmejzlB0KwEwg/f+nuBnJ5WOjDekR3PGb3d8Uh4xNSu90knT7V2KQ ue+Oh5SumW1Q6VzG3Nvx4HCQCUch1OMetRhRQnyZsGJky+dnZgy775ZLVF7RMseL02QV h21w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449550; x=1692054350; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; 
From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" Subject: [PATCH mm-unstable v9 16/31] pgalloc: Convert various functions to use ptdescs Date: Mon, 7 Aug 2023 16:04:58 -0700 Message-Id: <20230807230513.102486-17-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further.
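For reference, a minimal sketch of the conversion pattern this patch applies throughout the file (a generic before/after, not a hunk taken from the diff itself):

/* Before: bare page allocation returning a kernel-mapped table. */
pte_t *pte_table_alloc_old(void)
{
	return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL);
}

/* After: allocate a ptdesc instead. __GFP_HIGHMEM is masked off because
 * the caller needs a directly addressable table, and ptdesc_address()
 * replaces the implicit page-to-virtual conversion. */
pte_t *pte_table_alloc_new(void)
{
	struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_HIGHMEM, 0);

	if (!ptdesc)
		return NULL;
	return ptdesc_address(ptdesc);
}

The constructor/destructor pairs follow the same shape: pagetable_pte_ctor() and pagetable_pte_dtor() take the ptdesc where pgtable_pte_page_ctor() and pgtable_pte_page_dtor() took the page.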
Signed-off-by: Vishal Moola (Oracle) --- include/asm-generic/pgalloc.h | 88 +++++++++++++++++++++-------------- 1 file changed, 52 insertions(+), 36 deletions(-) diff --git a/include/asm-generic/pgalloc.h b/include/asm-generic/pgalloc.h index a7cf825befae..c75d4a753849 100644 --- a/include/asm-generic/pgalloc.h +++ b/include/asm-generic/pgalloc.h @@ -8,7 +8,7 @@ #define GFP_PGTABLE_USER (GFP_PGTABLE_KERNEL | __GFP_ACCOUNT) /** - * __pte_alloc_one_kernel - allocate a page for PTE-level kernel page table + * __pte_alloc_one_kernel - allocate memory for a PTE-level kernel page table * @mm: the mm_struct of the current context * * This function is intended for architectures that need @@ -18,12 +18,17 @@ */ static inline pte_t *__pte_alloc_one_kernel(struct mm_struct *mm) { - return (pte_t *)__get_free_page(GFP_PGTABLE_KERNEL); + struct ptdesc *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & + ~__GFP_HIGHMEM, 0); + + if (!ptdesc) + return NULL; + return ptdesc_address(ptdesc); } #ifndef __HAVE_ARCH_PTE_ALLOC_ONE_KERNEL /** - * pte_alloc_one_kernel - allocate a page for PTE-level kernel page table + * pte_alloc_one_kernel - allocate memory for a PTE-level kernel page table * @mm: the mm_struct of the current context * * Return: pointer to the allocated memory or %NULL on error @@ -35,40 +40,40 @@ static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) #endif /** - * pte_free_kernel - free PTE-level kernel page table page + * pte_free_kernel - free PTE-level kernel page table memory * @mm: the mm_struct of the current context * @pte: pointer to the memory containing the page table */ static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) { - free_page((unsigned long)pte); + pagetable_free(virt_to_ptdesc(pte)); } /** - * __pte_alloc_one - allocate a page for PTE-level user page table + * __pte_alloc_one - allocate memory for a PTE-level user page table * @mm: the mm_struct of the current context * @gfp: GFP flags to use for the allocation * - * Allocates a page and runs the pgtable_pte_page_ctor(). + * Allocate memory for a page table and ptdesc and runs pagetable_pte_ctor(). * * This function is intended for architectures that need * anything beyond simple page allocation or must have custom GFP flags. * - * Return: `struct page` initialized as page table or %NULL on error + * Return: `struct page` referencing the ptdesc or %NULL on error */ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp) { - struct page *pte; + struct ptdesc *ptdesc; - pte = alloc_page(gfp); - if (!pte) + ptdesc = pagetable_alloc(gfp, 0); + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(pte)) { - __free_page(pte); + if (!pagetable_pte_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - return pte; + return ptdesc_page(ptdesc); } #ifndef __HAVE_ARCH_PTE_ALLOC_ONE @@ -76,9 +81,9 @@ static inline pgtable_t __pte_alloc_one(struct mm_struct *mm, gfp_t gfp) * pte_alloc_one - allocate a page for PTE-level user page table * @mm: the mm_struct of the current context * - * Allocates a page and runs the pgtable_pte_page_ctor(). + * Allocate memory for a page table and ptdesc and runs pagetable_pte_ctor(). 
* - * Return: `struct page` initialized as page table or %NULL on error + * Return: `struct page` referencing the ptdesc or %NULL on error */ static inline pgtable_t pte_alloc_one(struct mm_struct *mm) { @@ -92,14 +97,16 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm) */ /** - * pte_free - free PTE-level user page table page + * pte_free - free PTE-level user page table memory * @mm: the mm_struct of the current context - * @pte_page: the `struct page` representing the page table + * @pte_page: the `struct page` referencing the ptdesc */ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) { - pgtable_pte_page_dtor(pte_page); - __free_page(pte_page); + struct ptdesc *ptdesc = page_ptdesc(pte_page); + + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } @@ -107,10 +114,11 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) #ifndef __HAVE_ARCH_PMD_ALLOC_ONE /** - * pmd_alloc_one - allocate a page for PMD-level page table + * pmd_alloc_one - allocate memory for a PMD-level page table * @mm: the mm_struct of the current context * - * Allocates a page and runs the pgtable_pmd_page_ctor(). + * Allocate memory for a page table and ptdesc and runs pagetable_pmd_ctor(). + * * Allocations use %GFP_PGTABLE_USER in user context and * %GFP_PGTABLE_KERNEL in kernel context. * @@ -118,28 +126,30 @@ static inline void pte_free(struct mm_struct *mm, struct page *pte_page) */ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr) { - struct page *page; + struct ptdesc *ptdesc; gfp_t gfp = GFP_PGTABLE_USER; if (mm == &init_mm) gfp = GFP_PGTABLE_KERNEL; - page = alloc_page(gfp); - if (!page) + ptdesc = pagetable_alloc(gfp, 0); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(page)) { - __free_page(page); + if (!pagetable_pmd_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - return (pmd_t *)page_address(page); + return ptdesc_address(ptdesc); } #endif #ifndef __HAVE_ARCH_PMD_FREE static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) { + struct ptdesc *ptdesc = virt_to_ptdesc(pmd); + BUG_ON((unsigned long)pmd & (PAGE_SIZE-1)); - pgtable_pmd_page_dtor(virt_to_page(pmd)); - free_page((unsigned long)pmd); + pagetable_pmd_dtor(ptdesc); + pagetable_free(ptdesc); } #endif @@ -150,19 +160,25 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd) static inline pud_t *__pud_alloc_one(struct mm_struct *mm, unsigned long addr) { gfp_t gfp = GFP_PGTABLE_USER; + struct ptdesc *ptdesc; if (mm == &init_mm) gfp = GFP_PGTABLE_KERNEL; - return (pud_t *)get_zeroed_page(gfp); + gfp &= ~__GFP_HIGHMEM; + + ptdesc = pagetable_alloc(gfp, 0); + if (!ptdesc) + return NULL; + return ptdesc_address(ptdesc); } #ifndef __HAVE_ARCH_PUD_ALLOC_ONE /** - * pud_alloc_one - allocate a page for PUD-level page table + * pud_alloc_one - allocate memory for a PUD-level page table * @mm: the mm_struct of the current context * - * Allocates a page using %GFP_PGTABLE_USER for user context and - * %GFP_PGTABLE_KERNEL for kernel context. + * Allocate memory for a page table using %GFP_PGTABLE_USER for user context + * and %GFP_PGTABLE_KERNEL for kernel context. 
* * Return: pointer to the allocated memory or %NULL on error */ @@ -175,7 +191,7 @@ static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr) static inline void __pud_free(struct mm_struct *mm, pud_t *pud) { BUG_ON((unsigned long)pud & (PAGE_SIZE-1)); - free_page((unsigned long)pud); + pagetable_free(virt_to_ptdesc(pud)); } #ifndef __HAVE_ARCH_PUD_FREE @@ -190,7 +206,7 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud) #ifndef __HAVE_ARCH_PGD_FREE static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) { - free_page((unsigned long)pgd); + pagetable_free(virt_to_ptdesc(pgd)); } #endif From patchwork Mon Aug 7 23:04:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345379 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 5550CC41513 for ; Mon, 7 Aug 2023 23:07:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230193AbjHGXHj (ORCPT ); Mon, 7 Aug 2023 19:07:39 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46170 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229640AbjHGXHJ (ORCPT ); Mon, 7 Aug 2023 19:07:09 -0400 Received: from mail-yb1-xb29.google.com (mail-yb1-xb29.google.com [IPv6:2607:f8b0:4864:20::b29]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id B29351FCC; Mon, 7 Aug 2023 16:06:13 -0700 (PDT) Received: by mail-yb1-xb29.google.com with SMTP id 3f1490d57ef6-d59da7115bdso263799276.3; Mon, 07 Aug 2023 16:06:13 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449552; x=1692054352; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=mSWx8dFer3K04xJe2FS2V6S19uUh9FiSOjAjYMXRNjw=; b=mEF5uAs6LPH63FkNSBplLFspkb60DHxm6zOnUwvAQJc3E/pOwbhAIiNFZBUYHiCsiM p2tbQp60CKHpnlFx92kmBMq0iICGLFKDwXQ0LyWNPB5CTFYLszQ4c8mEuzgfzEni9mwM Ttmf1qUiq6AjtyU7518jshwfi+pChbC60Sh5kSbXAB+lvEOcHSUd/ZfP7MFS2PgNjGFo r7Bvb+zEwdMz3ovaFVdL7n7AY1JJbQH54hsJVFdRCSvSwoDtl5zIXSCLg5x4eEWrxL65 mP+yB5DlcuhDlbUdEBdiXWCJptk41KogbXQLbDgKd5NU/tlto4itztjaP5et2Hvpw0rt 7VEA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449552; x=1692054352; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=mSWx8dFer3K04xJe2FS2V6S19uUh9FiSOjAjYMXRNjw=; b=Ul2EbYoiZDnnnK5/OQTnd0EdeM1/s4FV+7iT16UpjsA/OvjlKZHiEmCV1rtwtcQ3O2 f/EMU3k4l+KtVWvRc5AOnUwNTMWLWnlw2FATQDBeUVaiLnsRwjUnAGzr+u7WmNc4BSzo 0zVZcL84gGC4oXBsvcLOBewI1dsMe/98mH+qTa7ahORI+hpGsXkM1zt6AD1lRbQQ2AIT N4afKZmS8HJGWUychkjL1yj7Y+OCU9Z8vqEgy+5EM9S+toApQch5a5tKm/cZv5Lyvop8 dotW1y27g4d7SUW+msNdhm4k1tvJwIy0Q2DYUS5iMPQWcrQxC+w9/gaxTs9Ln0uJFRFq ZEJw== X-Gm-Message-State: AOJu0YzvWQYrVUPNYM2cLVA7H34tscKN0x5RZLz7Y+bkDHx5b+4dc5FH HrZDunYtNN+y7Ad1bHRrTrU= X-Google-Smtp-Source: AGHT+IFZB5AVL/MxN7IM/ds0EZn0mK90E4BWht9Xyb7zBcTjMbr/Qp8cCY+l+/2imbi83tiWiC1i0w== X-Received: by 2002:a25:4e05:0:b0:d12:3108:f90f with SMTP id c5-20020a254e05000000b00d123108f90fmr10423741ybb.24.1691449552662; Mon, 07 Aug 2023 16:05:52 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net 
([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.05.50 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:05:52 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Catalin Marinas , Mike Rapoport Subject: [PATCH mm-unstable v9 17/31] arm: Convert various functions to use ptdescs Date: Mon, 7 Aug 2023 16:04:59 -0700 Message-Id: <20230807230513.102486-18-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. late_alloc() also uses the __get_free_pages() helper function. Convert this to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further. Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/arm/include/asm/tlb.h | 12 +++++++----- arch/arm/mm/mmu.c | 7 ++++--- 2 files changed, 11 insertions(+), 8 deletions(-) diff --git a/arch/arm/include/asm/tlb.h b/arch/arm/include/asm/tlb.h index b8cbe03ad260..f40d06ad5d2a 100644 --- a/arch/arm/include/asm/tlb.h +++ b/arch/arm/include/asm/tlb.h @@ -39,7 +39,9 @@ static inline void __tlb_remove_table(void *_table) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - pgtable_pte_page_dtor(pte); + struct ptdesc *ptdesc = page_ptdesc(pte); + + pagetable_pte_dtor(ptdesc); #ifndef CONFIG_ARM_LPAE /* @@ -50,17 +52,17 @@ __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) __tlb_adjust_range(tlb, addr - PAGE_SIZE, 2 * PAGE_SIZE); #endif - tlb_remove_table(tlb, pte); + tlb_remove_ptdesc(tlb, ptdesc); } static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) { #ifdef CONFIG_ARM_LPAE - struct page *page = virt_to_page(pmdp); + struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pgtable_pmd_page_dtor(page); - tlb_remove_table(tlb, page); + pagetable_pmd_dtor(ptdesc); + tlb_remove_ptdesc(tlb, ptdesc); #endif } diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c index c9981c23e8e9..674ed71573a8 100644 --- a/arch/arm/mm/mmu.c +++ b/arch/arm/mm/mmu.c @@ -737,11 +737,12 @@ static void __init *early_alloc(unsigned long sz) static void *__init late_alloc(unsigned long sz) { - void *ptr = (void *)__get_free_pages(GFP_PGTABLE_KERNEL, get_order(sz)); + void *ptdesc = pagetable_alloc(GFP_PGTABLE_KERNEL & ~__GFP_HIGHMEM, + get_order(sz)); - if (!ptr || !pgtable_pte_page_ctor(virt_to_page(ptr))) + if (!ptdesc || !pagetable_pte_ctor(ptdesc)) BUG(); - return ptr; + return ptdesc_to_virt(ptdesc); } static pte_t * __init arm_pte_alloc(pmd_t *pmd, unsigned long addr, From patchwork Mon Aug 7 23:05:00 2023 
Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345486 From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Mike Rapoport , Catalin Marinas Subject: [PATCH mm-unstable v9 18/31] arm64: Convert various functions to use ptdescs Date: Mon, 7 Aug 2023 16:05:00 -0700 Message-Id: <20230807230513.102486-19-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Acked-by: Mike Rapoport (IBM) Acked-by: Catalin Marinas Signed-off-by: Vishal Moola (Oracle) --- arch/arm64/include/asm/tlb.h | 14 ++++++++------ arch/arm64/mm/mmu.c | 7 ++++--- 2 files changed, 12 insertions(+), 9 deletions(-) diff --git a/arch/arm64/include/asm/tlb.h b/arch/arm64/include/asm/tlb.h index c995d1f4594f..2c29239d05c3 100644 --- a/arch/arm64/include/asm/tlb.h +++ b/arch/arm64/include/asm/tlb.h @@ -75,18 +75,20 @@ static inline void tlb_flush(struct mmu_gather *tlb) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pte, unsigned long addr) { - pgtable_pte_page_dtor(pte); - tlb_remove_table(tlb, pte); + struct ptdesc *ptdesc = page_ptdesc(pte); + + pagetable_pte_dtor(ptdesc); + tlb_remove_ptdesc(tlb, ptdesc); } #if CONFIG_PGTABLE_LEVELS > 2 static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, unsigned long addr) { - struct page *page = virt_to_page(pmdp); + struct ptdesc *ptdesc = virt_to_ptdesc(pmdp); - pgtable_pmd_page_dtor(page); - tlb_remove_table(tlb, page); + pagetable_pmd_dtor(ptdesc); + tlb_remove_ptdesc(tlb, ptdesc); } #endif @@ -94,7 +96,7 @@ static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmdp, static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pudp, unsigned long addr) { - tlb_remove_table(tlb, virt_to_page(pudp)); + tlb_remove_ptdesc(tlb, virt_to_ptdesc(pudp)); } #endif diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c index 95d360805f8a..47781bec6171 100644 --- a/arch/arm64/mm/mmu.c +++ b/arch/arm64/mm/mmu.c @@ -426,6 +426,7 @@ static phys_addr_t __pgd_pgtable_alloc(int shift) static phys_addr_t pgd_pgtable_alloc(int shift) { phys_addr_t pa = __pgd_pgtable_alloc(shift); + struct ptdesc *ptdesc = page_ptdesc(phys_to_page(pa)); /* * Call proper page table ctor in case later we need to @@ -433,12 +434,12 @@ static phys_addr_t pgd_pgtable_alloc(int shift) * this pre-allocated page table. * * We don't select ARCH_ENABLE_SPLIT_PMD_PTLOCK if pmd is - * folded, and if so pgtable_pmd_page_ctor() becomes nop. + * folded, and if so pagetable_pte_ctor() becomes nop. 
*/ if (shift == PAGE_SHIFT) - BUG_ON(!pgtable_pte_page_ctor(phys_to_page(pa))); + BUG_ON(!pagetable_pte_ctor(ptdesc)); else if (shift == PMD_SHIFT) - BUG_ON(!pgtable_pmd_page_ctor(phys_to_page(pa))); + BUG_ON(!pagetable_pmd_ctor(ptdesc)); return pa; } From patchwork Mon Aug 7 23:05:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345495 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id D41BEC07E8F for ; Mon, 7 Aug 2023 23:09:02 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229459AbjHGXJB (ORCPT ); Mon, 7 Aug 2023 19:09:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45096 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230474AbjHGXHL (ORCPT ); Mon, 7 Aug 2023 19:07:11 -0400 Received: from mail-yb1-xb33.google.com (mail-yb1-xb33.google.com [IPv6:2607:f8b0:4864:20::b33]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2BC931FEB; Mon, 7 Aug 2023 16:06:17 -0700 (PDT) Received: by mail-yb1-xb33.google.com with SMTP id 3f1490d57ef6-d35a9d7a5bdso5116784276.0; Mon, 07 Aug 2023 16:06:16 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449557; x=1692054357; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=H/qh+9u1BcCIl2V5thva5G2rletbRXaxfcBgzJGGgXU=; b=pCbA+UxrLZ96Iu/LappjpSZmLEozYOYmdO5RTxaM7BlCH20e3MEhp3Bn4lGiK7HrnF zI5vTymTyexPs2fEWKMf3YOixGPBtUiqT1+gX7fLWBa+Q9gs2M2cYtqmojJG+cqpxy8a yiuXYP2euWfRpeTnK/Ph7t69G3h70UOk68zLodvepBaU3Ub4ZFK2KFzJhB7TlCbjRGdX ie8hnx40ZzCuPRz3xDX8QGC9rAdvd3I0ubBMPCct9yJ64tgNkqrnZ4vxqqMZLVPIiaaP rYqmFrrQlxG0A3en5YH5Vcd7HyjhL8swCun/14090vxoeSz0eujRX7UW9eNuwqOQts53 n0Fw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449557; x=1692054357; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=H/qh+9u1BcCIl2V5thva5G2rletbRXaxfcBgzJGGgXU=; b=iNcXzuxp1fznxU1DohhOUAMN0DmqpzyPzEKA8je5gZajgOpZEwKk981dxXf3GBeqZR ZthPpAyUk4SHH2A63X+hbBAccchWDcEmqwKa9L7hv/FzkhEkLNWdemsMJ50DVs5xqP7x VZKTpJzLFvlQTv1geKOtHDSgpwfdJDNjQsKffF5kBwbM0tJfH391AfXjmkUlZTPNn3Pe EXgN0Xfg9OU7EFQOQxhvLbkjbKw/ZVlS97o6NRyEORrIGI7usP9BSztULU1DLHBPfxE/ t6nCkNF4SScgPjufEvbI+8niHbxOGY9wHsruq6B+2MCAZMcTah/cTDmaJAvCsc6cadq1 VGLA== X-Gm-Message-State: AOJu0YwiDWbfGcVp41LI7Wed0Aie9SIU6tdl4gh7mYxEfaz19x6Zh1i0 DK6NlK8eKc6JGQb/vIzcOf4= X-Google-Smtp-Source: AGHT+IGb7qTBQz9qcaNa56xxCbdxPZmnKDiha96eaUPQlL4XzY5YUB1UJSqCKxKHPrzOS/VVDboJfg== X-Received: by 2002:a05:6902:4c4:b0:d0f:65db:dc0e with SMTP id v4-20020a05690204c400b00d0f65dbdc0emr10052463ybs.39.1691449556886; Mon, 07 Aug 2023 16:05:56 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.05.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:05:56 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, 
linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Guo Ren , Mike Rapoport Subject: [PATCH mm-unstable v9 19/31] csky: Convert __pte_free_tlb() to use ptdescs Date: Mon, 7 Aug 2023 16:05:01 -0700 Message-Id: <20230807230513.102486-20-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. Acked-by: Guo Ren Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/csky/include/asm/pgalloc.h | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/arch/csky/include/asm/pgalloc.h b/arch/csky/include/asm/pgalloc.h index 7d57e5da0914..9c84c9012e53 100644 --- a/arch/csky/include/asm/pgalloc.h +++ b/arch/csky/include/asm/pgalloc.h @@ -63,8 +63,8 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) #define __pte_free_tlb(tlb, pte, address) \ do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page(tlb, pte); \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc(tlb, page_ptdesc(pte)); \ } while (0) extern void pagetable_init(void); From patchwork Mon Aug 7 23:05:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345493 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 93459C27C7A for ; Mon, 7 Aug 2023 23:09:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231527AbjHGXI7 (ORCPT ); Mon, 7 Aug 2023 19:08:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45360 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230493AbjHGXHL (ORCPT ); Mon, 7 Aug 2023 19:07:11 -0400 Received: from mail-yb1-xb35.google.com (mail-yb1-xb35.google.com [IPv6:2607:f8b0:4864:20::b35]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 8855C1FF5; Mon, 7 Aug 2023 16:06:17 -0700 (PDT) Received: by mail-yb1-xb35.google.com with SMTP id 3f1490d57ef6-d4789fd9317so2925205276.1; Mon, 07 Aug 2023 16:06:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449559; x=1692054359; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=w5tIdqLMLMEIndKmulGbjbVGcWP2eiFuISuM6iDOZlQ=; b=gXJTXcEvyVT7IoetVDXBwB8IPw8UIwzytrLAqRYME0pNQ9C4uggCnI6RPeBYUmiABc NGa1RjP9x6Az35hUll5l4sYYgp98uBkULFey+10bJthE9R/J1BfMB4fjSuNmCpJ1YXG1 ARmLg59bINaXUvevzM80HySnkL9LRFKwNZ83rXMUa4P0U6g1vYwZDB00ToF81qaBEF1+ SCcl4G/9B02tlTVDHjLxfkMGL5hqppCgW9NlCKGjCO3XjVlW37imlXi2cPPjZHQnQLsK WyhHmorW6P7DI7CdyMJrfi88WQs4xcY1ld9fhabvXIyG+tqrpY5PfVFDlBq81eHbR+r8 HFBw== 
From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Mike Rapoport Subject: [PATCH mm-unstable v9 20/31] hexagon: Convert __pte_free_tlb() to use ptdescs Date: Mon, 7 Aug 2023 16:05:02 -0700 Message-Id: <20230807230513.102486-21-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents.
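The per-arch __pte_free_tlb() conversions in this and the neighboring patches all follow one shape; a sketch of that pattern is below. It is equivalent to the hexagon diff that follows, with the ptdesc held in a local variable for readability; the macro name is made up to avoid clashing with the real one.

/* Generic illustration of the pattern: derive the ptdesc from the
 * pgtable_t page once, run the ptdesc destructor, then hand the same
 * ptdesc to the mmu_gather helper. */
#define __pte_free_tlb_sketch(tlb, pte, addr)			\
do {								\
	struct ptdesc *__pt = page_ptdesc(pte);			\
								\
	pagetable_pte_dtor(__pt);				\
	tlb_remove_page_ptdesc((tlb), __pt);			\
} while (0)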
Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/hexagon/include/asm/pgalloc.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/hexagon/include/asm/pgalloc.h b/arch/hexagon/include/asm/pgalloc.h index f0c47e6a7427..55988625e6fb 100644 --- a/arch/hexagon/include/asm/pgalloc.h +++ b/arch/hexagon/include/asm/pgalloc.h @@ -87,10 +87,10 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, max_kernel_seg = pmdindex; } -#define __pte_free_tlb(tlb, pte, addr) \ -do { \ - pgtable_pte_page_dtor((pte)); \ - tlb_remove_page((tlb), (pte)); \ +#define __pte_free_tlb(tlb, pte, addr) \ +do { \ + pagetable_pte_dtor((page_ptdesc(pte))); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #endif From patchwork Mon Aug 7 23:05:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345487 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 18A0EC25B76 for ; Mon, 7 Aug 2023 23:08:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231612AbjHGXIx (ORCPT ); Mon, 7 Aug 2023 19:08:53 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46218 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231224AbjHGXHL (ORCPT ); Mon, 7 Aug 2023 19:07:11 -0400 Received: from mail-yb1-xb2c.google.com (mail-yb1-xb2c.google.com [IPv6:2607:f8b0:4864:20::b2c]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 76DC51FFA; Mon, 7 Aug 2023 16:06:18 -0700 (PDT) Received: by mail-yb1-xb2c.google.com with SMTP id 3f1490d57ef6-d59da7115bdso263865276.3; Mon, 07 Aug 2023 16:06:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449561; x=1692054361; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=VD2saKmHCgP979K/4Ur2DZD5lv+pCsAy+26MDZ4lRXE=; b=KaA5Xf0Zq7VankK3QOvVnhKCvDQA0idGZ5r/QtBaYpBUZhM7IUDDnipX0Mck9NuTr6 FMzfUiWHWV61utAYtiNJjvsmZyQrjEGqV1vWWuYtjGooZk9wKA6pHj8JXiEh77Vd66gm gq9wCdtohtsBXI+QiM0g5DM3RYb5yVm35tcAKNI7nQZaFVODWjyuyXisRnjCevekbcDp QweW3qBvEiOtjGdP8+KO+Ai79xaww40f8oFcpzIoZ31G0t7vkt4E3RfT7Asw6Y6mNUmW 4v5tgaFeOHT3yezqwQ6Pv7HkjczS8wtW/o3vWALkNkC7/L83lM6ERIvU0onlnalg95pZ XQOA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449561; x=1692054361; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=VD2saKmHCgP979K/4Ur2DZD5lv+pCsAy+26MDZ4lRXE=; b=iqL73rg6HEQ7wzQuWmHEzq7mD/TFI1cDK5G9kT+jw+AgRuh1e67mT1Jk4tHnYJn7VE kwN0LBT1I+Pj6gWNLJ/Ig7uZXP8B/MSyPeZZVnYvdNIJMaTRZAluHoTWThyf2bjTV0OG rLT6J7KhLKaUIwkr8XPnuFi4AZFvtnVdegJD9b9R9UBUM0m4OozKnq+Cg7ojpnGDeShT LUlS4SBs4XrypuvqqE2ksHrf4R/jWsfC/yVLR3MXCrNmz/FHS9IARzAXvdAa5siABS5B SuYInCsMEqps4TjF+E8SvOuFnHGQIuNTHfsKeFRDFDg4w1STCaxmqDtAgCW1FwxBaOUG lWQg== X-Gm-Message-State: AOJu0YwR83E3tfhVKYhADgbfD2fm6PJ4kfh6UjiCdDA4QLBTEI2wI/L7 0cASAS+DleMTk5DsX/Rrc7E= X-Google-Smtp-Source: AGHT+IGT7jcZ858GNZbGv2ymKyHkNCq6k2xr+zx9+OdmxVuwoAhnOKduGJp0I6ObmWSEyVKiLNM8sw== X-Received: by 2002:a25:8210:0:b0:d39:fa2f:8b63 with SMTP id 
q16-20020a258210000000b00d39fa2f8b63mr13670853ybk.25.1691449561057; Mon, 07 Aug 2023 16:06:01 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.05.59 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:00 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Huacai Chen , Mike Rapoport Subject: [PATCH mm-unstable v9 21/31] loongarch: Convert various functions to use ptdescs Date: Mon, 7 Aug 2023 16:05:03 -0700 Message-Id: <20230807230513.102486-22-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further. Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/loongarch/include/asm/pgalloc.h | 27 +++++++++++++++------------ arch/loongarch/mm/pgtable.c | 7 ++++--- 2 files changed, 19 insertions(+), 15 deletions(-) diff --git a/arch/loongarch/include/asm/pgalloc.h b/arch/loongarch/include/asm/pgalloc.h index af1d1e4a6965..23f5b1107246 100644 --- a/arch/loongarch/include/asm/pgalloc.h +++ b/arch/loongarch/include/asm/pgalloc.h @@ -45,9 +45,9 @@ extern void pagetable_init(void); extern pgd_t *pgd_alloc(struct mm_struct *mm); #define __pte_free_tlb(tlb, pte, address) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), pte); \ +do { \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) #ifndef __PAGETABLE_PMD_FOLDED @@ -55,18 +55,18 @@ do { \ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) { pmd_t *pmd; - struct page *pg; + struct ptdesc *ptdesc; - pg = alloc_page(GFP_KERNEL_ACCOUNT); - if (!pg) + ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, 0); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(pg)) { - __free_page(pg); + if (!pagetable_pmd_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - pmd = (pmd_t *)page_address(pg); + pmd = ptdesc_address(ptdesc); pmd_init(pmd); return pmd; } @@ -80,10 +80,13 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address) { pud_t *pud; + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0); - pud = (pud_t *) __get_free_page(GFP_KERNEL); - if (pud) - pud_init(pud); + if (!ptdesc) + return NULL; + pud = ptdesc_address(ptdesc); + + pud_init(pud); return pud; } diff --git 
a/arch/loongarch/mm/pgtable.c b/arch/loongarch/mm/pgtable.c index 1260cf30e3ee..b14343e211b6 100644 --- a/arch/loongarch/mm/pgtable.c +++ b/arch/loongarch/mm/pgtable.c @@ -11,10 +11,11 @@ pgd_t *pgd_alloc(struct mm_struct *mm) { - pgd_t *ret, *init; + pgd_t *init, *ret = NULL; + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0); - ret = (pgd_t *) __get_free_page(GFP_KERNEL); - if (ret) { + if (ptdesc) { + ret = (pgd_t *)ptdesc_address(ptdesc); init = pgd_offset(&init_mm, 0UL); pgd_init(ret); memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD, From patchwork Mon Aug 7 23:05:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345490 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 25E41C27C6F for ; Mon, 7 Aug 2023 23:08:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231271AbjHGXI5 (ORCPT ); Mon, 7 Aug 2023 19:08:57 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45212 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231270AbjHGXHL (ORCPT ); Mon, 7 Aug 2023 19:07:11 -0400 Received: from mail-yw1-x112b.google.com (mail-yw1-x112b.google.com [IPv6:2607:f8b0:4864:20::112b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 7AE191FFC; Mon, 7 Aug 2023 16:06:18 -0700 (PDT) Received: by mail-yw1-x112b.google.com with SMTP id 00721157ae682-584388ec441so52574687b3.3; Mon, 07 Aug 2023 16:06:18 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449563; x=1692054363; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=b5UN203jFDiNo/YHJ5VSjgxwvcpHMPrLplz2RQSMCvk=; b=OTPDdNMzu7YNnKXhv+Q3ZqqlENNm5aYdj6D71YWlelfDeuCfloDRpvW/2blg63fLEM wp2lqLun+gv7jM76tgF3SL5EUSik6EQoGHQg/la3vssrqrbUd6clAJJZm9oTnkSFnW3p rrak3TkX2TlgthUOthGhnmNqe2u92uklZfoOWmy63Mi41aONrji7aHLiDQehYWfjAdaZ soPZkqDTfcWv3Jopxt6sr0srYUxYWkTWEBaSK7GkVRhdyRSTtqcke1kGItccZqtqhkia YtUOs+TmHqOaN+5n3hq45fmviaiyb+cOIccc/DlZdoNJ1+R3sKq7dO2/Ljgq8Rru35Tt Yo/w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449563; x=1692054363; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=b5UN203jFDiNo/YHJ5VSjgxwvcpHMPrLplz2RQSMCvk=; b=ZgMMQ7Ry+PsLiEW/1Kh2g/SFgLJv7pcusyBSEnM3OJCDa6sjaugE3OkZdEgB41lwWK rJzx56I6wXlEEr/ThRxy6a7tF2eJME/73+qJEzMyOXKHN1uhfKGXqN9VH/s+DFxTZGgP +5vYfEnYs4FEurmhTdm9bXPJPPoALXKKeA80MvUgLwR0Cbfd9GkrEcyb8FFAizI7I4B7 EyR+ESFJ6cSzBsxQmjyz32RbFczVDq35QOxlok5OOKQYdbl/5V9ofFk1EfrvvC4svOTi C5RGMceuH8/rzSghl9qiOJOHk2hk88xJq83wo3QOgFo7ac5kyLbtrglSTT75Qq3jzfD8 mvig== X-Gm-Message-State: AOJu0YxM50tXhI+uDiD6zUtG3QDYxVD6brjKIaKuhNNpFJNC3s75tMRa IfhI5mF8Lwx2t/S45sQuks8= X-Google-Smtp-Source: AGHT+IGso2idiavqhjdtI0le+b51Epzt0Ore5+PmjI6n65mRaKN/pRbN2Fu2wELJlR9Tq3E7Uha1dg== X-Received: by 2002:a25:be08:0:b0:d0e:2e5c:2f80 with SMTP id h8-20020a25be08000000b00d0e2e5c2f80mr8938705ybk.64.1691449563064; Mon, 07 Aug 2023 16:06:03 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with 
ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.01 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:02 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Mike Rapoport , Geert Uytterhoeven Subject: [PATCH mm-unstable v9 22/31] m68k: Convert various functions to use ptdescs Date: Mon, 7 Aug 2023 16:05:04 -0700 Message-Id: <20230807230513.102486-23-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further. Acked-by: Mike Rapoport (IBM) Acked-by: Geert Uytterhoeven Signed-off-by: Vishal Moola (Oracle) --- arch/m68k/include/asm/mcf_pgalloc.h | 47 ++++++++++++++-------------- arch/m68k/include/asm/sun3_pgalloc.h | 8 ++--- arch/m68k/mm/motorola.c | 4 +-- 3 files changed, 30 insertions(+), 29 deletions(-) diff --git a/arch/m68k/include/asm/mcf_pgalloc.h b/arch/m68k/include/asm/mcf_pgalloc.h index 5c2c0a864524..302c5bf67179 100644 --- a/arch/m68k/include/asm/mcf_pgalloc.h +++ b/arch/m68k/include/asm/mcf_pgalloc.h @@ -5,22 +5,22 @@ #include #include -extern inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) +static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte) { - free_page((unsigned long) pte); + pagetable_free(virt_to_ptdesc(pte)); } extern const char bad_pmd_string[]; -extern inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) +static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm) { - unsigned long page = __get_free_page(GFP_DMA); + struct ptdesc *ptdesc = pagetable_alloc((GFP_DMA | __GFP_ZERO) & + ~__GFP_HIGHMEM, 0); - if (!page) + if (!ptdesc) return NULL; - memset((void *)page, 0, PAGE_SIZE); - return (pte_t *) (page); + return ptdesc_address(ptdesc); } extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address) @@ -35,36 +35,34 @@ extern inline pmd_t *pmd_alloc_kernel(pgd_t *pgd, unsigned long address) static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t pgtable, unsigned long address) { - struct page *page = virt_to_page(pgtable); + struct ptdesc *ptdesc = virt_to_ptdesc(pgtable); - pgtable_pte_page_dtor(page); - __free_page(page); + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } static inline pgtable_t pte_alloc_one(struct mm_struct *mm) { - struct page *page = alloc_pages(GFP_DMA, 0); + struct ptdesc *ptdesc = pagetable_alloc(GFP_DMA | __GFP_ZERO, 0); pte_t *pte; - if (!page) + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if 
(!pagetable_pte_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - pte = page_address(page); - clear_page(pte); - + pte = ptdesc_address(ptdesc); return pte; } static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable) { - struct page *page = virt_to_page(pgtable); + struct ptdesc *ptdesc = virt_to_ptdesc(pgtable); - pgtable_pte_page_dtor(page); - __free_page(page); + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } /* @@ -75,16 +73,19 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t pgtable) static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) { - free_page((unsigned long) pgd); + pagetable_free(virt_to_ptdesc(pgd)); } static inline pgd_t *pgd_alloc(struct mm_struct *mm) { pgd_t *new_pgd; + struct ptdesc *ptdesc = pagetable_alloc((GFP_DMA | __GFP_NOWARN) & + ~__GFP_HIGHMEM, 0); - new_pgd = (pgd_t *)__get_free_page(GFP_DMA | __GFP_NOWARN); - if (!new_pgd) + if (!ptdesc) return NULL; + new_pgd = ptdesc_address(ptdesc); + memcpy(new_pgd, swapper_pg_dir, PTRS_PER_PGD * sizeof(pgd_t)); memset(new_pgd, 0, PAGE_OFFSET >> PGDIR_SHIFT); return new_pgd; diff --git a/arch/m68k/include/asm/sun3_pgalloc.h b/arch/m68k/include/asm/sun3_pgalloc.h index 198036aff519..ff48573db2c0 100644 --- a/arch/m68k/include/asm/sun3_pgalloc.h +++ b/arch/m68k/include/asm/sun3_pgalloc.h @@ -17,10 +17,10 @@ extern const char bad_pmd_string[]; -#define __pte_free_tlb(tlb,pte,addr) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), pte); \ +#define __pte_free_tlb(tlb, pte, addr) \ +do { \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd, pte_t *pte) diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c index 8bca46e51e94..c1761d309fc6 100644 --- a/arch/m68k/mm/motorola.c +++ b/arch/m68k/mm/motorola.c @@ -161,7 +161,7 @@ void *get_pointer_table(int type) * m68k doesn't have SPLIT_PTE_PTLOCKS for not having * SMP. 
*/ - pgtable_pte_page_ctor(virt_to_page(page)); + pagetable_pte_ctor(virt_to_ptdesc(page)); } mmu_page_ctor(page); @@ -201,7 +201,7 @@ int free_pointer_table(void *table, int type) list_del(dp); mmu_page_dtor((void *)page); if (type == TABLE_PTE) - pgtable_pte_page_dtor(virt_to_page((void *)page)); + pagetable_pte_dtor(virt_to_ptdesc((void *)page)); free_page (page); return 1; } else if (ptable_list[type].next != dp) { From patchwork Mon Aug 7 23:05:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345488 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id EC977C25B78 for ; Mon, 7 Aug 2023 23:08:57 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231616AbjHGXIz (ORCPT ); Mon, 7 Aug 2023 19:08:55 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46222 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230370AbjHGXHL (ORCPT ); Mon, 7 Aug 2023 19:07:11 -0400 Received: from mail-yb1-xb35.google.com (mail-yb1-xb35.google.com [IPv6:2607:f8b0:4864:20::b35]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 9DCFE2101; Mon, 7 Aug 2023 16:06:19 -0700 (PDT) Received: by mail-yb1-xb35.google.com with SMTP id 3f1490d57ef6-d479d128596so3766473276.1; Mon, 07 Aug 2023 16:06:19 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449565; x=1692054365; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SCvcoKWLGx6gAjym9VWVFszZElqPM/KxU0nQCUs9lhU=; b=UEUmdRWyflXklFyhNX736vuEaqgiuaE747b2iDMdbsntaDVnz91uxqlH5BcdgyOH5x uvJaTvPkpQGN9akIqApZ8zKHvLZguej6njtjOnVMX/cM5jmsFezl/LYYVYVkbeho4b0z HHNTILzp+Xc4v6kzsuUo73oAPiN0CJazQDUAwcPLlet/lIEFQgMVrv8B24qA6Hz5wjkw b8w6Ifa5zhm6L11dsVrnJ9eA4XJTM6QxI5QWilhFcSjtgNKdoGywloLoSOAO6BZpBMSb 8CiRrKjceUBw/RGFjWiyE5TzR9GZJ5T3ThAn9ujnfgeZbAfe+0n8PqgNKPCIQtpGYCsJ QRzw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449565; x=1692054365; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SCvcoKWLGx6gAjym9VWVFszZElqPM/KxU0nQCUs9lhU=; b=OfhBKV50UdItgef9y5J3YvMKYKZ8Y+Le7uuHKStez4Pj1Z0uc1q80lDEHvMDqZ4uW6 mt56yy+RvAn0K/EFacvg0FgHjD5dV1cpLvMzcjojQczgZ5W15b6jeh0bTqrZ9DhRTDCi eT4KrNHe6PV5ljbIIa1QCVwLbPXuHCRboWskvCIqM4SkZZIJ3c5LpPbj+p4YPNR4/gLU Dl7GLbDZ4CAin0NZC13yTJmhi7eC1phs7YQwtEvWIJqcQ35cfQB5KJdYnLQnubalQvMh kOsiIMeuMXVVp9VzdLLAKAghXIBXZgeuwhA1PdZ0asQJT5aVwXVgwjLY96V4eiBWL3Eh 2e6Q== X-Gm-Message-State: AOJu0YypYNsPhGpQhcpsMnrOXjiUJuKNlyU4NFZfOVpxIvPl3zgmaYDL uujpZ+bvmffv+xZ2imj2YMA= X-Google-Smtp-Source: AGHT+IEqzmUPQsWPkdBtUj3V9QXK122J73o4hp/g5UvHeQgKIN7qrVubtY48PFSpogl4MBW4UXPeVQ== X-Received: by 2002:a25:944:0:b0:cb4:6167:a69c with SMTP id u4-20020a250944000000b00cb46167a69cmr9868214ybm.8.1691449565251; Mon, 07 Aug 2023 16:06:05 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.03 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 
16:06:04 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Thomas Bogendoerfer , Mike Rapoport Subject: [PATCH mm-unstable v9 23/31] mips: Convert various functions to use ptdescs Date: Mon, 7 Aug 2023 16:05:05 -0700 Message-Id: <20230807230513.102486-24-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further. Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/mips/include/asm/pgalloc.h | 32 ++++++++++++++++++-------------- arch/mips/mm/pgtable.c | 8 +++++--- 2 files changed, 23 insertions(+), 17 deletions(-) diff --git a/arch/mips/include/asm/pgalloc.h b/arch/mips/include/asm/pgalloc.h index f72e737dda21..40e40a7eb94a 100644 --- a/arch/mips/include/asm/pgalloc.h +++ b/arch/mips/include/asm/pgalloc.h @@ -51,13 +51,13 @@ extern pgd_t *pgd_alloc(struct mm_struct *mm); static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd) { - free_pages((unsigned long)pgd, PGD_TABLE_ORDER); + pagetable_free(virt_to_ptdesc(pgd)); } -#define __pte_free_tlb(tlb,pte,address) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), pte); \ +#define __pte_free_tlb(tlb, pte, address) \ +do { \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), page_ptdesc(pte)); \ } while (0) #ifndef __PAGETABLE_PMD_FOLDED @@ -65,18 +65,18 @@ do { \ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) { pmd_t *pmd; - struct page *pg; + struct ptdesc *ptdesc; - pg = alloc_pages(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER); - if (!pg) + ptdesc = pagetable_alloc(GFP_KERNEL_ACCOUNT, PMD_TABLE_ORDER); + if (!ptdesc) return NULL; - if (!pgtable_pmd_page_ctor(pg)) { - __free_pages(pg, PMD_TABLE_ORDER); + if (!pagetable_pmd_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - pmd = (pmd_t *)page_address(pg); + pmd = ptdesc_address(ptdesc); pmd_init(pmd); return pmd; } @@ -90,10 +90,14 @@ static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long address) static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long address) { pud_t *pud; + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, + PUD_TABLE_ORDER); - pud = (pud_t *) __get_free_pages(GFP_KERNEL, PUD_TABLE_ORDER); - if (pud) - pud_init(pud); + if (!ptdesc) + return NULL; + pud = ptdesc_address(ptdesc); + + pud_init(pud); return pud; } diff --git a/arch/mips/mm/pgtable.c b/arch/mips/mm/pgtable.c index b13314be5d0e..1506e458040d 100644 --- a/arch/mips/mm/pgtable.c +++ 
b/arch/mips/mm/pgtable.c @@ -10,10 +10,12 @@ pgd_t *pgd_alloc(struct mm_struct *mm) { - pgd_t *ret, *init; + pgd_t *init, *ret = NULL; + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, + PGD_TABLE_ORDER); - ret = (pgd_t *) __get_free_pages(GFP_KERNEL, PGD_TABLE_ORDER); - if (ret) { + if (ptdesc) { + ret = ptdesc_address(ptdesc); init = pgd_offset(&init_mm, 0UL); pgd_init(ret); memcpy(ret + USER_PTRS_PER_PGD, init + USER_PTRS_PER_PGD, From patchwork Mon Aug 7 23:05:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345485 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 286D2C25B4F for ; Mon, 7 Aug 2023 23:08:53 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231592AbjHGXIt (ORCPT ); Mon, 7 Aug 2023 19:08:49 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:46236 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231321AbjHGXHM (ORCPT ); Mon, 7 Aug 2023 19:07:12 -0400 Received: from mail-yw1-x1134.google.com (mail-yw1-x1134.google.com [IPv6:2607:f8b0:4864:20::1134]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 203C62105; Mon, 7 Aug 2023 16:06:20 -0700 (PDT) Received: by mail-yw1-x1134.google.com with SMTP id 00721157ae682-583c48a9aa1so52642377b3.1; Mon, 07 Aug 2023 16:06:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449567; x=1692054367; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Ke0Z6VjA86Q0iJYCAtHmQzMGt2iXqJZdkl8QiQF9dCo=; b=W+7e7ZXA6pq6bkOdiU4kqXMdaRoaFiJYq1YjNheTen2BEAKrJsHyDCwylC8TwIaELf OoOeKnKEm2Bup1E3lQkfzBFhNx4UEQeZdbP6gFS8EtHNx1s6wSSR8pJNy131BEuCYsID 5yHclRtlDqB0YUpxc8mp/j3duvqB/kcSwVKs7titgSAxTd3bahGpDvqB1s5BR/l1Q8b5 JJTz17F1sukT0v2fD4c6TeVfoKxL+4R8ZKOAgjggAvDHXskwAwGz/9+hK5mkw3BMis+2 8wcIPPvCrUvFi8Oxcy9SzYIpq7haNxXeYlICDRNRlufgVcr5TYo9K2btnO8SAcdlpPvp 77sA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449567; x=1692054367; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Ke0Z6VjA86Q0iJYCAtHmQzMGt2iXqJZdkl8QiQF9dCo=; b=YUVBBKK5ML5byR6piki+xLkWIUOY9etUJlf+E8t+SCgZEWIuOWpflVlhx0I4L9f2gO GIT8fMi++rZEV9R6U844m+gPpYn4dK9Xvodq+Zp+8oR/MhVjek5veMjePWWkXHW/bYkk +EfI6mOPyBxOkzlPxW6g6oCgonhKznWMbDSGi5cOo36B3kyGdWwW/2dOFekawwRxLddk pclc8yoFhOnghvOW/hO+xDZ9eXaSQOwTQQhwm2GkK6MXes2apDLaDccMbwB3MNCwFnaB WUy1cXcTyBrhwmGX1wJ8uPsTU9mx3x0oESZPMPO1qr1nlWIjyuzk0XWTrRxfuBnKdH0l 8BzQ== X-Gm-Message-State: AOJu0YzIETyZz650mIVfHzEM+DyNapu2JSOmPyJX2B+87424pxRBFvxj Vqw3C70GhO46ghIvlEJ3Tcg= X-Google-Smtp-Source: AGHT+IEFwZzf7hWzCaDcJ0rf5o011LKpXcZZ+QgyAwbXtivToV2K1Z3qkIAZZunAX4r/XJQivv3Vpw== X-Received: by 2002:a25:29c2:0:b0:d4c:f456:d562 with SMTP id p185-20020a2529c2000000b00d4cf456d562mr4842541ybp.1.1691449567341; Mon, 07 Aug 2023 16:06:07 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.05 (version=TLS1_3 
cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:07 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Mike Rapoport , Dinh Nguyen Subject: [PATCH mm-unstable v9 24/31] nios2: Convert __pte_free_tlb() to use ptdescs Date: Mon, 7 Aug 2023 16:05:06 -0700 Message-Id: <20230807230513.102486-25-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. Acked-by: Mike Rapoport (IBM) Acked-by: Dinh Nguyen Signed-off-by: Vishal Moola (Oracle) --- arch/nios2/include/asm/pgalloc.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/nios2/include/asm/pgalloc.h b/arch/nios2/include/asm/pgalloc.h index ecd1657bb2ce..ce6bb8e74271 100644 --- a/arch/nios2/include/asm/pgalloc.h +++ b/arch/nios2/include/asm/pgalloc.h @@ -28,10 +28,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, extern pgd_t *pgd_alloc(struct mm_struct *mm); -#define __pte_free_tlb(tlb, pte, addr) \ - do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), (pte)); \ +#define __pte_free_tlb(tlb, pte, addr) \ + do { \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #endif /* _ASM_NIOS2_PGALLOC_H */ From patchwork Mon Aug 7 23:05:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345492 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 214CDC3DA41 for ; Mon, 7 Aug 2023 23:09:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231694AbjHGXI7 (ORCPT ); Mon, 7 Aug 2023 19:08:59 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45094 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230115AbjHGXHM (ORCPT ); Mon, 7 Aug 2023 19:07:12 -0400 Received: from mail-oi1-x231.google.com (mail-oi1-x231.google.com [IPv6:2607:f8b0:4864:20::231]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 322802106; Mon, 7 Aug 2023 16:06:20 -0700 (PDT) Received: by mail-oi1-x231.google.com with SMTP id 5614622812f47-3a3efee1d44so3967808b6e.3; Mon, 07 Aug 2023 16:06:20 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449570; x=1692054370; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=8XaHpTf3P/utbT17OPG/H5MolPsiYyRMVXSdV5AvoHU=; 
b=GmyaVrawFabDS7oM/FJ2wsqOTnvNiSZkWHllyllBSGtBhNSpEmhqjU2xmr+48gvMwO LUXnCk58UhmfTImv2R4R9LNF0+WlzxXIVQ4Y7ZC5fEeHoGrn0o/ogRSADB0Y+NluFrtN MrrlZ4Lz/gi5878wRIWP06wgFFKVx+8i+ghKFpBHekPx9T0p+3sqj2xh34rznNB4nw+U O7FH+uzt41O2vvO9FAauF/idvM9r4rw7Q35Wwsa8GbYTT2MLxcQcKcfZ8291m/c55wrV URc64VPnBpaXKs/Bni+oDqsCjGnYy/ppZbjbVbnispVrc0PlqosfX2g85SuErvT0W1h1 o3sg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449570; x=1692054370; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=8XaHpTf3P/utbT17OPG/H5MolPsiYyRMVXSdV5AvoHU=; b=jc9rf8b/akZq12I/xiR2SIzyWPySDNIp71f5Zv+iMrmipxizhqUbxjIAPBKtdixqc3 dC3JaKvQQXQmS51aEKaat75KYVU9XPDBUxU4XXYnGnVJ6464mqNfZ/pJgWsBsca9g0B4 fwUcNml5MppLzmYvpS21ScXr96b3lXZM33i15sf/0lwpYiTBXeF9Yu8fgPpGvItErBu8 9D8uGtTReFRmItTeZfx2QhX6lqFrWbzd1gKZqCeUVHEpFv029B30ZQ/3UgDuR0qoasRU SOMor38GrJIhbxx97Aixl84EAo7hmAa3FxyFamqBQI3wTDKHEy513wxJsE8oi/Eg3SOb qgXw== X-Gm-Message-State: AOJu0YwpckU4zAwiQAaKKSPsf73Wp3JL+HuawYitxa+BeK9pPF15caLr BsdY2xafKQaoxcjAioaamqM= X-Google-Smtp-Source: AGHT+IFfR71b5uGC/N7nUJvkV7EEf+q4ZrD/2jbgt4p5+XAZUesJuHroaMR2T6Qu+08oLKK4OIq5Tg== X-Received: by 2002:a05:6808:1997:b0:3a7:5314:e55e with SMTP id bj23-20020a056808199700b003a75314e55emr14747526oib.24.1691449569767; Mon, 07 Aug 2023 16:06:09 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.07 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:09 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Jonas Bonn , Mike Rapoport Subject: [PATCH mm-unstable v9 25/31] openrisc: Convert __pte_free_tlb() to use ptdescs Date: Mon, 7 Aug 2023 16:05:07 -0700 Message-Id: <20230807230513.102486-26-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. 
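As with the nios2 change just above, only __pte_free_tlb() is touched: the struct page based destructor and TLB batching calls are swapped for their ptdesc counterparts. A minimal sketch of the resulting shape, written as an inline function for readability (the architectures keep it as a macro, and the example_* name is a placeholder rather than code from this patch):

#include <linux/mm.h>		/* pagetable_pte_dtor(), page_ptdesc() */
#include <asm-generic/tlb.h>	/* struct mmu_gather, tlb_remove_page_ptdesc() */

static inline void example_pte_free_tlb(struct mmu_gather *tlb, struct page *pte)
{
	/* undo pagetable_pte_ctor(): ptlock teardown and page table accounting */
	pagetable_pte_dtor(page_ptdesc(pte));
	/* hand the table to the mmu_gather batch instead of freeing it directly */
	tlb_remove_page_ptdesc(tlb, page_ptdesc(pte));
}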
Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/openrisc/include/asm/pgalloc.h | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/arch/openrisc/include/asm/pgalloc.h b/arch/openrisc/include/asm/pgalloc.h index b7b2b8d16fad..c6a73772a546 100644 --- a/arch/openrisc/include/asm/pgalloc.h +++ b/arch/openrisc/include/asm/pgalloc.h @@ -66,10 +66,10 @@ extern inline pgd_t *pgd_alloc(struct mm_struct *mm) extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm); -#define __pte_free_tlb(tlb, pte, addr) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), (pte)); \ +#define __pte_free_tlb(tlb, pte, addr) \ +do { \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #endif From patchwork Mon Aug 7 23:05:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345496 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E171DC25B7D for ; Mon, 7 Aug 2023 23:09:03 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S229591AbjHGXJB (ORCPT ); Mon, 7 Aug 2023 19:09:01 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45312 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S231355AbjHGXHM (ORCPT ); Mon, 7 Aug 2023 19:07:12 -0400 Received: from mail-yb1-xb36.google.com (mail-yb1-xb36.google.com [IPv6:2607:f8b0:4864:20::b36]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id AC3F72109; Mon, 7 Aug 2023 16:06:21 -0700 (PDT) Received: by mail-yb1-xb36.google.com with SMTP id 3f1490d57ef6-d35a9d7a5bdso5116874276.0; Mon, 07 Aug 2023 16:06:21 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449572; x=1692054372; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=NpbBCMkwg1WbsmcEnKHREFultw/IY4tG3Jsqz6w0a8E=; b=nS5J1UYk8f9msjmOJfcwkB8RnOSNZv60DefW2TDIHuOlvS9npGk8oT2LZd+SySl8sv g+gjV1veIOE2SNGvcFOMJ1pPimwLTZ7+pvFQ4bVBKr+eOsPpBFIPNto8klZeZdDlbFxo wkyNcA7u+kweH04q+BG4NvsUNaK7E9zibuZ01wwXmu5qj90Fw+mY1pf3xcSZWMeZa7wK aGRqh9CjWX7iCRUWp+ZrbhBol1e4MCC8JBlzqolLSWyg74wpF8elWEgkI2Vj4oenaGhD isx7MJZrX54CbrhbHsj99pEt0PqmBmGh9HnsD2DaWM5Gm6veS11IW5m9dzOKe44sU/M/ md6Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449572; x=1692054372; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=NpbBCMkwg1WbsmcEnKHREFultw/IY4tG3Jsqz6w0a8E=; b=DHdwlr1ZLcIzfSyCwcFedN1pehdxk2lPDpj5jadwZTLJc4GFEAaoqWrQah7s/iRmPQ xpHWqQSNAXGQ+fQpXUQRUVHkk+sA/oQPGwi8mneqnmqjTzNTtxO1AjAvobOtn2nyXH+Q XxNAmZvISteddm8aG70TYyZ3YcyBCP0uLo57BPaYX87RJi2lTHQ1VtR5D+5sZSjki7yM oleFFYFPwzMMvfsUTYPSSTJv3g8eqt1wxAE/7ss7VDREkgCwlyoehSukWhF4YCQmTQKs clK8zPZ1QiDaiN0rIiy7h8K5OVdAJZ7eX3ZKvuGIXC0hdpAWUDdXd6s6SR4YskmbiUDt 6GCw== X-Gm-Message-State: AOJu0YxXqGFxmNPd9O+Y7p2Nd6y+QzBcpUpO0+L7UTxOLQYijx1fEOSd n3+k6vIpz9vmgKdfWdWU7ms= X-Google-Smtp-Source: AGHT+IHFLqgFAthROKM1yjg8aTn4QAXhjx425iARLy/TVdE1z6PZIG9RjSxWoTmqjhSqLQZr0gTMYg== X-Received: by 2002:a25:d24d:0:b0:d0a:127e:7478 with SMTP id 
j74-20020a25d24d000000b00d0a127e7478mr9478817ybg.44.1691449571835; Mon, 07 Aug 2023 16:06:11 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.09 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:11 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Paul Walmsley , Palmer Dabbelt , Mike Rapoport Subject: [PATCH mm-unstable v9 26/31] riscv: Convert alloc_{pmd, pte}_late() to use ptdescs Date: Mon, 7 Aug 2023 16:05:08 -0700 Message-Id: <20230807230513.102486-27-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. Some of the functions use the *get*page*() helper functions. Convert these to use pagetable_alloc() and ptdesc_address() instead to help standardize page tables further. 
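In other words, the converted late allocators all reduce to: allocate a ptdesc, run the matching constructor, and hand back a physical address derived from ptdesc_address(). A condensed sketch of that shape (placeholder function name, not the literal riscv code; see the hunks below for the real alloc_pte_late()/alloc_pmd_late()):

#include <linux/mm.h>
#include <linux/gfp.h>

static phys_addr_t __init example_alloc_pte_late(void)
{
	/* page tables may not live in highmem, hence the masked gfp flags */
	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0);

	/* failure during early setup is fatal, as with the old __get_free_page() */
	BUG_ON(!ptdesc || !pagetable_pte_ctor(ptdesc));

	/* callers expect a physical address, so translate the linear mapping */
	return __pa(ptdesc_address(ptdesc));
}

Masking off __GFP_HIGHMEM keeps the behaviour of the old __get_free_page() path, which always returned a page from the linear map.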
Acked-by: Palmer Dabbelt Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/riscv/include/asm/pgalloc.h | 8 ++++---- arch/riscv/mm/init.c | 16 ++++++---------- 2 files changed, 10 insertions(+), 14 deletions(-) diff --git a/arch/riscv/include/asm/pgalloc.h b/arch/riscv/include/asm/pgalloc.h index 59dc12b5b7e8..d169a4f41a2e 100644 --- a/arch/riscv/include/asm/pgalloc.h +++ b/arch/riscv/include/asm/pgalloc.h @@ -153,10 +153,10 @@ static inline pgd_t *pgd_alloc(struct mm_struct *mm) #endif /* __PAGETABLE_PMD_FOLDED */ -#define __pte_free_tlb(tlb, pte, buf) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), pte); \ +#define __pte_free_tlb(tlb, pte, buf) \ +do { \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), page_ptdesc(pte));\ } while (0) #endif /* CONFIG_MMU */ diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c index 9ce504737d18..430a3d05a841 100644 --- a/arch/riscv/mm/init.c +++ b/arch/riscv/mm/init.c @@ -353,12 +353,10 @@ static inline phys_addr_t __init alloc_pte_fixmap(uintptr_t va) static phys_addr_t __init alloc_pte_late(uintptr_t va) { - unsigned long vaddr; - - vaddr = __get_free_page(GFP_KERNEL); - BUG_ON(!vaddr || !pgtable_pte_page_ctor(virt_to_page((void *)vaddr))); + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0); - return __pa(vaddr); + BUG_ON(!ptdesc || !pagetable_pte_ctor(ptdesc)); + return __pa((pte_t *)ptdesc_address(ptdesc)); } static void __init create_pte_mapping(pte_t *ptep, @@ -436,12 +434,10 @@ static phys_addr_t __init alloc_pmd_fixmap(uintptr_t va) static phys_addr_t __init alloc_pmd_late(uintptr_t va) { - unsigned long vaddr; - - vaddr = __get_free_page(GFP_KERNEL); - BUG_ON(!vaddr || !pgtable_pmd_page_ctor(virt_to_page((void *)vaddr))); + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL & ~__GFP_HIGHMEM, 0); - return __pa(vaddr); + BUG_ON(!ptdesc || !pagetable_pmd_ctor(ptdesc)); + return __pa((pmd_t *)ptdesc_address(ptdesc)); } static void __init create_pmd_mapping(pmd_t *pmdp, From patchwork Mon Aug 7 23:05:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345483 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id CD37BC07E8F for ; Mon, 7 Aug 2023 23:08:48 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S230333AbjHGXIq (ORCPT ); Mon, 7 Aug 2023 19:08:46 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45394 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S230322AbjHGXHO (ORCPT ); Mon, 7 Aug 2023 19:07:14 -0400 Received: from mail-yb1-xb2d.google.com (mail-yb1-xb2d.google.com [IPv6:2607:f8b0:4864:20::b2d]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2F5F81FD0; Mon, 7 Aug 2023 16:06:24 -0700 (PDT) Received: by mail-yb1-xb2d.google.com with SMTP id 3f1490d57ef6-d0548cf861aso5310963276.3; Mon, 07 Aug 2023 16:06:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449574; x=1692054374; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=SuJFa6mAGFwbCAvFBsdWEeQAHJIQ94yaq1s2El5pO5M=; 
b=sRCgndxFhKIflL3dRUOciVixuY3um2CsOH9mqCT4RfpcBNQLoC/2xIlVP2BC0LrRXg d0Mfnrp8G1Gq70+B6oCgBqzc+N7zHH9xY0+av47l7B8CO6Tt+n9pMVSoswf5Jzx8/yzy NzCNQAVK/Lkid61qiT1qq4e/dSW3T9N7fVh6BXy/whyNG/50NiRv4160ULtNDt7ZAJ8u ZEinV12wAdQdY0FHQSDiTfa7/P6pGvAR+Q1UWR0j31w4+0ktiAC1WT1q41QbYEtl/+hQ cKdjsvUfk2w2q52f0mVtR4dPNDq1UHAYwd2r0wZzmsP36I/aPnhCen/ej73PJ82E0WBs UQzg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449574; x=1692054374; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=SuJFa6mAGFwbCAvFBsdWEeQAHJIQ94yaq1s2El5pO5M=; b=lyIxMfUA9H/tyWrl0uEtP4kLBJkYiBs7cZapW73wURRpo0l1ocFZP/dcglw1DW/JmS f9xzMejxJE8dIDMozLfXsqHvaa4ajHbh8NtbSHFx+QL+mQ3nNwLm/iXU93OaasouHr/k Ko8EhttKKfwvNq0+frehvdoPKALr6BgmmBDTtedVxK77WFT4I3X3r0sy/iQcRsjCHK03 eWF+y3/XEEkRnIG1pb9CBFKoEp9xHc8X71iwMqDkktN1uuLiATps9l83f1yxgpfHt8AA +9AZcnPEDKDmiXrfpqTinTT+caQUGFz1KgQ//B8C88IHneKhkw82a0wT21wpjGaVPsLT EKaA== X-Gm-Message-State: AOJu0YxHoj/E++oh/A6qC6NfYVlTLpIGkRPooEdAybzSRsXc3Xl9je16 jsqCbn0iefc0iwXNLvBHykw= X-Google-Smtp-Source: AGHT+IHDPhUgl52q/fREvy7LWtCvyksQLzlkWI4tWrtb//A8NgqAG/s+XL1YOkSBACmJ6FF0msS3mA== X-Received: by 2002:a25:4081:0:b0:d4c:82ff:7bde with SMTP id n123-20020a254081000000b00d4c82ff7bdemr6684894yba.63.1691449574044; Mon, 07 Aug 2023 16:06:14 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.11 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:13 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Yoshinori Sato , Geert Uytterhoeven , John Paul Adrian Glaubitz , Mike Rapoport Subject: [PATCH mm-unstable v9 27/31] sh: Convert pte_free_tlb() to use ptdescs Date: Mon, 7 Aug 2023 16:05:09 -0700 Message-Id: <20230807230513.102486-28-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. Also cleans up some spacing issues. 
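Stepping back, these per-architecture patches are a mechanical substitution of helpers; the rough mapping, gathered from the hunks in this series (a summary, not a complete API listing), is:

/*
 *   alloc_pages() / __get_free_pages()  ->  pagetable_alloc(gfp, order)
 *   __free_pages() / free_pages()       ->  pagetable_free(ptdesc)
 *   page_address(page)                  ->  ptdesc_address(ptdesc)
 *   virt_to_page(addr)                  ->  virt_to_ptdesc(addr)
 *   pgtable_pte_page_ctor(page)         ->  pagetable_pte_ctor(page_ptdesc(page))
 *   pgtable_pte_page_dtor(page)         ->  pagetable_pte_dtor(page_ptdesc(page))
 *   pgtable_pmd_page_ctor(page)         ->  pagetable_pmd_ctor(page_ptdesc(page))
 *   pgtable_pmd_page_dtor(page)         ->  pagetable_pmd_dtor(page_ptdesc(page))
 *   tlb_remove_page(tlb, page)          ->  tlb_remove_page_ptdesc(tlb, ptdesc)
 */

The left-hand ctor/dtor wrappers are deleted by the final patch in the series once no callers remain.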
Reviewed-by: Geert Uytterhoeven Acked-by: John Paul Adrian Glaubitz Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/sh/include/asm/pgalloc.h | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/arch/sh/include/asm/pgalloc.h b/arch/sh/include/asm/pgalloc.h index a9e98233c4d4..5d8577ab1591 100644 --- a/arch/sh/include/asm/pgalloc.h +++ b/arch/sh/include/asm/pgalloc.h @@ -2,6 +2,7 @@ #ifndef __ASM_SH_PGALLOC_H #define __ASM_SH_PGALLOC_H +#include #include #define __HAVE_ARCH_PMD_ALLOC_ONE @@ -31,10 +32,10 @@ static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd, set_pmd(pmd, __pmd((unsigned long)page_address(pte))); } -#define __pte_free_tlb(tlb,pte,addr) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb), (pte)); \ +#define __pte_free_tlb(tlb, pte, addr) \ +do { \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #endif /* __ASM_SH_PGALLOC_H */ From patchwork Mon Aug 7 23:05:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345484 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id A2E65C41513 for ; Mon, 7 Aug 2023 23:08:51 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231567AbjHGXIs (ORCPT ); Mon, 7 Aug 2023 19:08:48 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45320 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229456AbjHGXHZ (ORCPT ); Mon, 7 Aug 2023 19:07:25 -0400 Received: from mail-oi1-x22b.google.com (mail-oi1-x22b.google.com [IPv6:2607:f8b0:4864:20::22b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id EB26B1711; Mon, 7 Aug 2023 16:06:24 -0700 (PDT) Received: by mail-oi1-x22b.google.com with SMTP id 5614622812f47-3a6f87b2993so3745146b6e.3; Mon, 07 Aug 2023 16:06:24 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449576; x=1692054376; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=9BaWlDiv5V7gFuBlQl83W1x9C2ZQPjYd5FMjDEEKLfk=; b=cpy8Xcl19ScHIaEJohMfnXLz1DqejMUY8L9glfoRJKsJ7GTvGcuQsYlnxy1uVil+vX sKLd/jl5n7/nYQY+c94D1q52xLTeGbCjuaSsuOaGb/1E2rdiSHktCUTDo4f7HqbeV+dj TPAGwr0CoWyRhx6mzwQflnUob8s/y/rCKdVhSwURC5eFFFvRnN/tn2Qy2PbiuytULAR6 gF0MKwRWTHX4EnLLnTfGRqqibjjUEq59j99BhXKpF7aj2wSWqBVgMsp2RTnzW+/7J2fC DAIl8+bDSwnR/XjlkNOvluV6Us9KQMDrngkUDG9ywXNDgTmwYNXsy+sDpdm8duXtW4+4 Ov0A== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449576; x=1692054376; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9BaWlDiv5V7gFuBlQl83W1x9C2ZQPjYd5FMjDEEKLfk=; b=NhqZL1a5TND6TXDhQTHj7C6hOgLYl4aeQUNg/egDJnOUHPFo0KlQzy7WjjLw6s0v3U U7N57L8rg9bLgJ3DnPQPiKyHMBX8YUDVBlFcpXAygRd2P92E9lYmxnKazS1S8zMLwZyW lq25DlzDoDZ5pkx/+XqmmDGU1HffSEraI8JJDT4VYEOWCJuMiiYPLel7fcuWPuSM7Gdj adIBgw6mPfK6YjSU2axEoCDyOf1iZ0QvSNVILyy3+mwjzfv+ScKN6QRySX7C0SUHPrwI IbJm3Fo91M9OjO3CXEEIR+LZ78olQygvbS/8WoB3f/WhnBWO4hLH+aWL/u1hyakRJPfp cEHA== X-Gm-Message-State: 
AOJu0Yzobh4+grrhbs1y9OBxU1mUjWe0sDIMzOLtLzgmWp1zxF4aWpgl Ce3F/twKpglx9dRGjmEWjCU= X-Google-Smtp-Source: AGHT+IHyrg/p55954UWSWEFSK4VAgtzO3wGDAXzmBNoLynnb9Xv8FQKgcm7kods1XM481PofrT4NJA== X-Received: by 2002:a05:6808:23d3:b0:3a3:e6eb:9c53 with SMTP id bq19-20020a05680823d300b003a3e6eb9c53mr13630414oib.23.1691449576353; Mon, 07 Aug 2023 16:06:16 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:16 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Mike Rapoport Subject: [PATCH mm-unstable v9 28/31] sparc64: Convert various functions to use ptdescs Date: Mon, 7 Aug 2023 16:05:10 -0700 Message-Id: <20230807230513.102486-29-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org As part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents, convert various page table functions to use ptdescs. 
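The process-context allocation and free paths converted here pair pagetable_alloc() with the PTE constructor and mirror each other on teardown. A condensed sketch of that pair (placeholder names, not the sparc64 code itself):

#include <linux/mm.h>

static pte_t *example_pte_table_alloc(void)
{
	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);

	if (!ptdesc)
		return NULL;
	/* the ctor can fail (split page table lock allocation), so undo the alloc */
	if (!pagetable_pte_ctor(ptdesc)) {
		pagetable_free(ptdesc);
		return NULL;
	}
	return ptdesc_address(ptdesc);
}

static void example_pte_table_free(pte_t *pte)
{
	/* map the table's linear address back to its ptdesc, then tear it down */
	struct ptdesc *ptdesc = virt_to_ptdesc(pte);

	pagetable_pte_dtor(ptdesc);
	pagetable_free(ptdesc);
}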
Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/sparc/mm/init_64.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c index 680ef206565c..f83017992eaa 100644 --- a/arch/sparc/mm/init_64.c +++ b/arch/sparc/mm/init_64.c @@ -2907,14 +2907,15 @@ pte_t *pte_alloc_one_kernel(struct mm_struct *mm) pgtable_t pte_alloc_one(struct mm_struct *mm) { - struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO); - if (!page) + struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0); + + if (!ptdesc) return NULL; - if (!pgtable_pte_page_ctor(page)) { - __free_page(page); + if (!pagetable_pte_ctor(ptdesc)) { + pagetable_free(ptdesc); return NULL; } - return (pte_t *) page_address(page); + return ptdesc_address(ptdesc); } void pte_free_kernel(struct mm_struct *mm, pte_t *pte) @@ -2924,10 +2925,10 @@ void pte_free_kernel(struct mm_struct *mm, pte_t *pte) static void __pte_free(pgtable_t pte) { - struct page *page = virt_to_page(pte); + struct ptdesc *ptdesc = virt_to_ptdesc(pte); - pgtable_pte_page_dtor(page); - __free_page(page); + pagetable_pte_dtor(ptdesc); + pagetable_free(ptdesc); } void pte_free(struct mm_struct *mm, pgtable_t pte) From patchwork Mon Aug 7 23:05:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345491 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id E949BC27C78 for ; Mon, 7 Aug 2023 23:08:59 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231651AbjHGXI6 (ORCPT ); Mon, 7 Aug 2023 19:08:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45916 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229865AbjHGXHc (ORCPT ); Mon, 7 Aug 2023 19:07:32 -0400 Received: from mail-yw1-x1136.google.com (mail-yw1-x1136.google.com [IPv6:2607:f8b0:4864:20::1136]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4D017198D; Mon, 7 Aug 2023 16:06:29 -0700 (PDT) Received: by mail-yw1-x1136.google.com with SMTP id 00721157ae682-585ff234cd1so54757857b3.3; Mon, 07 Aug 2023 16:06:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449578; x=1692054378; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=Yb5+fiX/YSPgeVruPImZCslC+7XzLtSDmAZ+KnCCvCw=; b=LURal4KqklmU9AiCOifU5Mxc6WM+IybZMw4f+H9b+ehsvWfF2rdEGUCUpHED/xgeP0 03+is7BherE6vDUPF9cInwhZZ6O6FVEP73Sri7+CBG0/7jN64G4bgtBq56ZrcLUBIxxT uSfx8fFqjmiqF0MtkULVgc1gjFJH4lOvkwRfZt9wzMVRnizjKRrxJwuFQ2mIfEo7cHgy UR2ZIO+mIWR9RWU4d+HJedZnPKiZaDv8i4STB2D6+pQaLcMt/pvWXKdhAwPxBTPHyGDc YUL8XwwyvOUHrQNxNKfZS8WVqZrgPXs80ZLFz3I7kRgWNiGS07HTyMzLuT9V1dLH0d1v g2Pg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449578; x=1692054378; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=Yb5+fiX/YSPgeVruPImZCslC+7XzLtSDmAZ+KnCCvCw=; b=W5+LP1+7zjfJ2zqv4JMsmC8EkG8kapuuQIc3Ev1iOr0QqdB/eOMI0pxw9RX4iPzSej vI8zGcZHggv2YNUe0aiB9cI5LuqWHkKel/HZsCPjySf7Q3b20hFdDzps6hbjflHIAsIP 
kBiFH+a4OTApK8wTqc35dGm1csXccKLagdB1vQk4EQXp7jGzIuBFNji4E4QyLfT/bZR6 avBKHvDoLMZzB9cHGQ7ylP6JHXfT8FvTNf6WKRqgOEUWJJ23Sn1LCSKGGG80yDFOIZsr neE5fyhinAU3Gq1w6BFs4kd1dGjX6slO3rC0/hR08RbS6DOFqjT2H0ctfNA9g/JPcnnG FcDQ== X-Gm-Message-State: AOJu0YzKImx9dElVEUJPfwlVnVd6QgwY5tbdBSsCiT59z+G0EnxIJfWn E82BMgM359sAHdpqGGtOniw= X-Google-Smtp-Source: AGHT+IHSK0JHQfLZYNqZB3BuP0bNBVURutz2ITpQB4n2EhlNMl4NfRND8wevRJ4hRdMY0+rsbG2z6w== X-Received: by 2002:a25:ac04:0:b0:d0a:8269:e5d9 with SMTP id w4-20020a25ac04000000b00d0a8269e5d9mr9659872ybi.60.1691449578542; Mon, 07 Aug 2023 16:06:18 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:18 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , "David S. Miller" , Mike Rapoport Subject: [PATCH mm-unstable v9 29/31] sparc: Convert pgtable_pte_page_{ctor, dtor}() to ptdesc equivalents Date: Mon, 7 Aug 2023 16:05:11 -0700 Message-Id: <20230807230513.102486-30-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable pte constructor/destructors with ptdesc equivalents. 
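srmmu carves its PTE tables out of shared nocache pages, so the allocation itself is left alone; only the constructor/destructor calls change, wrapping the existing struct page with page_ptdesc() at the call site. A rough sketch of that minimal form (hypothetical helpers; the page_ref guard stands in for srmmu's first/last-table-on-this-page bookkeeping):

#include <linux/mm.h>
#include <linux/page_ref.h>

static bool example_pte_table_attach(struct page *page)
{
	/* first table placed on this page: set up the ptlock/accounting state */
	if (page_ref_inc_return(page) == 2 &&
	    !pagetable_pte_ctor(page_ptdesc(page))) {
		page_ref_dec(page);
		return false;
	}
	return true;
}

static void example_pte_table_detach(struct page *page)
{
	/* last table gone from this page: tear that state back down */
	if (page_ref_dec_return(page) == 1)
		pagetable_pte_dtor(page_ptdesc(page));
}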
Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/sparc/mm/srmmu.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c index 13f027afc875..8393faa3e596 100644 --- a/arch/sparc/mm/srmmu.c +++ b/arch/sparc/mm/srmmu.c @@ -355,7 +355,8 @@ pgtable_t pte_alloc_one(struct mm_struct *mm) return NULL; page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT); spin_lock(&mm->page_table_lock); - if (page_ref_inc_return(page) == 2 && !pgtable_pte_page_ctor(page)) { + if (page_ref_inc_return(page) == 2 && + !pagetable_pte_ctor(page_ptdesc(page))) { page_ref_dec(page); ptep = NULL; } @@ -371,7 +372,7 @@ void pte_free(struct mm_struct *mm, pgtable_t ptep) page = pfn_to_page(__nocache_pa((unsigned long)ptep) >> PAGE_SHIFT); spin_lock(&mm->page_table_lock); if (page_ref_dec_return(page) == 1) - pgtable_pte_page_dtor(page); + pagetable_pte_dtor(page_ptdesc(page)); spin_unlock(&mm->page_table_lock); srmmu_free_nocache(ptep, SRMMU_PTE_TABLE_SIZE); From patchwork Mon Aug 7 23:05:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345489 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id ACEFEC27C5F for ; Mon, 7 Aug 2023 23:08:58 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231638AbjHGXI4 (ORCPT ); Mon, 7 Aug 2023 19:08:56 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45918 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229760AbjHGXHb (ORCPT ); Mon, 7 Aug 2023 19:07:31 -0400 Received: from mail-yb1-xb2e.google.com (mail-yb1-xb2e.google.com [IPv6:2607:f8b0:4864:20::b2e]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 4CF5D198C; Mon, 7 Aug 2023 16:06:29 -0700 (PDT) Received: by mail-yb1-xb2e.google.com with SMTP id 3f1490d57ef6-d479d128596so3766673276.1; Mon, 07 Aug 2023 16:06:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449580; x=1692054380; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=T9QlLO1dPeyZnWI5hHw1TuJlTK1nfQVgIvIYX0JocUU=; b=ZmZBlK2BE47dPXTqc8IQ1uTKCUobrHijjQnotS76WbJUUgCGVOPFVEmsLAOFQve7Mq 8k04A5/AuFSS607BBJG/Nd5XpAmnH5zPypJPEnlo7r978iqN52fpsMDyDxOev3Z41hLb CpvzpxqdWGHuM0xgZvYoYbOi2GTwb0pfPUe0oyJziB36Q6lozhNOIAq2JTvWjnhFdQ2U 4BFxzsH5/HG/XXl5/YPVavhDi3L4gvkLB1NxxGw9654yYac9diJlEe3+5Z8y0K4G8C7u mJwXJ96QObfNW/oMJd7PgnIP4//NNbVjGeiAj0SJF/1wFKYAbBETcCfRmwi4qkbVDTFa RY/w== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449580; x=1692054380; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=T9QlLO1dPeyZnWI5hHw1TuJlTK1nfQVgIvIYX0JocUU=; b=ZtOSjQSDGrysRDDN6Fh749Slw45EwMIW29NjcJbCfB8R+K8EDI2oPO6Di08wK5XSB+ 9Z4taGldj/9fjTtKspYu4thAgPvWjZLfDBhirQZWDK/UVO/LV2sbrslbBDQm5jXPC8Ny GG4VFQcJKVwUOQD6srUUoH5/CwWwICEL2wzOUIC+9J5gwjhGOV1oeevsmlyHwpLPFAEU rFf9yfAsvOVM19Q8+UnYKncCdX6dJUcYOW1RWq6+3z6v2POFXiUSdXH9vqyxUg6OCpxI QjPOW4BTmhSdDPctyHDmEoN/o9R9R57jDAyFPy2olPO2Fp4LN2sC/90nSrqDgfaAyGJ2 a90Q== 
X-Gm-Message-State: AOJu0YxGWdv+E1N+1qunEwH0HOyAUZDW4qdYhBjkaU7Ejj2HA2p/bHWU N2iqQfjEWX9+JwdU205eeTA= X-Google-Smtp-Source: AGHT+IFhMlpHJ+7onf6BYxf2ZEVLLZTaCynnVlAbd/4Vka+RTYiQ5OXzXPwOq0O9emhI9trC6wG3Sw== X-Received: by 2002:a5b:f8c:0:b0:d0b:7e30:6d17 with SMTP id q12-20020a5b0f8c000000b00d0b7e306d17mr8498276ybh.14.1691449580620; Mon, 07 Aug 2023 16:06:20 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.18 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:20 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Richard Weinberger , Mike Rapoport Subject: [PATCH mm-unstable v9 30/31] um: Convert {pmd, pte}_free_tlb() to use ptdescs Date: Mon, 7 Aug 2023 16:05:12 -0700 Message-Id: <20230807230513.102486-31-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org Part of the conversions to replace pgtable constructor/destructors with ptdesc equivalents. Also cleans up some spacing issues. 
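Besides the PTE macro, this patch also converts the PMD level, where the macro receives a pointer into the table rather than a struct page, so virt_to_ptdesc() does the mapping instead of page_ptdesc(). An inline sketch of that variant (placeholder name; the real __pmd_free_tlb() stays a macro, as in the hunk below):

#include <linux/mm.h>
#include <asm-generic/tlb.h>

static inline void example_pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd)
{
	/* pmd points into the table itself, so map it back to its ptdesc */
	pagetable_pmd_dtor(virt_to_ptdesc(pmd));
	tlb_remove_page_ptdesc(tlb, virt_to_ptdesc(pmd));
}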
Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- arch/um/include/asm/pgalloc.h | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/arch/um/include/asm/pgalloc.h b/arch/um/include/asm/pgalloc.h index 8ec7cd46dd96..de5e31c64793 100644 --- a/arch/um/include/asm/pgalloc.h +++ b/arch/um/include/asm/pgalloc.h @@ -25,19 +25,19 @@ */ extern pgd_t *pgd_alloc(struct mm_struct *); -#define __pte_free_tlb(tlb,pte, address) \ -do { \ - pgtable_pte_page_dtor(pte); \ - tlb_remove_page((tlb),(pte)); \ +#define __pte_free_tlb(tlb, pte, address) \ +do { \ + pagetable_pte_dtor(page_ptdesc(pte)); \ + tlb_remove_page_ptdesc((tlb), (page_ptdesc(pte))); \ } while (0) #ifdef CONFIG_3_LEVEL_PGTABLES -#define __pmd_free_tlb(tlb, pmd, address) \ -do { \ - pgtable_pmd_page_dtor(virt_to_page(pmd)); \ - tlb_remove_page((tlb),virt_to_page(pmd)); \ -} while (0) \ +#define __pmd_free_tlb(tlb, pmd, address) \ +do { \ + pagetable_pmd_dtor(virt_to_ptdesc(pmd)); \ + tlb_remove_page_ptdesc((tlb), virt_to_ptdesc(pmd)); \ +} while (0) #endif From patchwork Mon Aug 7 23:05:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: "Vishal Moola (Oracle)" X-Patchwork-Id: 13345494 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 970B1C2FC0F for ; Mon, 7 Aug 2023 23:09:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S231708AbjHGXJA (ORCPT ); Mon, 7 Aug 2023 19:09:00 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:45890 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S229863AbjHGXHc (ORCPT ); Mon, 7 Aug 2023 19:07:32 -0400 Received: from mail-yb1-xb32.google.com (mail-yb1-xb32.google.com [IPv6:2607:f8b0:4864:20::b32]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 5C572199D; Mon, 7 Aug 2023 16:06:29 -0700 (PDT) Received: by mail-yb1-xb32.google.com with SMTP id 3f1490d57ef6-d4789fd9317so2925409276.1; Mon, 07 Aug 2023 16:06:29 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20221208; t=1691449583; x=1692054383; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=gIX4EtUZRQHbbGVDBj/rJyP4ZKSCaTL9vvlgteJwLjg=; b=ny/PO9Rzwz4wKD05c8Oj8x01t9rfWUwF+DPBSi6xhAQ1t3uiUBYZ7YUUbf8WtD6fas a2dGGpvFO2SYjUkztwf/JpnNv7QhJL+Goto2OWRpxKnsAjXenC9v6P3rISzqLv6y4LVV 4Qzw+SXgVTszZ9R94kkJTKmg5SvkJhFFe8ZHJiGz98l2xVrWZH/L4O6QNYM4Ix9oN6dK IfbxHi4LcKxjr+5M4cvO0MwysEIi/pmb6eegOI2VGQmX9xJr0pTMWjXflu5Ws+wnG+9V BkJIEDxKgjBTQlry6d49f1/eWCkgGXBEpt1ui7ZKc/08IVlfaQd/C3Y9s9851HPExZOi t/gw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1691449583; x=1692054383; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=gIX4EtUZRQHbbGVDBj/rJyP4ZKSCaTL9vvlgteJwLjg=; b=MyubxoSv2dPzI+gnTZeRE/Y5bXYDjZTJ3Q8LGa7uri2b96lr2Kxjtli6LKXIIlrcqg amIdLYh/RbXjijAySm2xVl0lLyQIaK2FB+b+0NKwZW+kaUsQzpHC5ataNM0Lg+unN5eP or49+zi87uks4nGislg/AYlbse2nZARq0TTVHZhXQJgfmLLqjJdGC32K7W2hUAfYSa4U zk17rIgRQsOeXqetHne7MynTRtga/d65lEbwylhzX+hvofWaIfEVDRiAz0ym3nbIqP0z 
KxLsU5UDcpJqZUs4rYBuk9Ioe0KfRsXiT79Xpj7HGrs/qWLVGnqriaI6oCXD3WgoxEhY VmEQ== X-Gm-Message-State: AOJu0Yxu1jd0lmCHI6nOEm5Z1A6IzKcGkLeeedcIzIUpPIEAAY7PJ9B9 ZxqELVDkdVxKoAk5qtEPnvY= X-Google-Smtp-Source: AGHT+IHSz7sLxFa/9tcB1BFEAx77xsw7IyoQHCgeHupxbvTa93/qmodby26m2eSPWXo4FijMYi73DQ== X-Received: by 2002:a25:e708:0:b0:d53:f88a:dc09 with SMTP id e8-20020a25e708000000b00d53f88adc09mr3953501ybh.2.1691449582662; Mon, 07 Aug 2023 16:06:22 -0700 (PDT) Received: from unknowna0e70b2ca394.attlocal.net ([2600:1700:2f7d:1800::16]) by smtp.googlemail.com with ESMTPSA id d190-20020a25cdc7000000b00d3596aca5bcsm2545203ybf.34.2023.08.07.16.06.20 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 07 Aug 2023 16:06:22 -0700 (PDT) From: "Vishal Moola (Oracle)" To: Andrew Morton , Matthew Wilcox Cc: linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org, loongarch@lists.linux.dev, linux-m68k@lists.linux-m68k.org, linux-mips@vger.kernel.org, linux-openrisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, linux-sh@vger.kernel.org, sparclinux@vger.kernel.org, linux-um@lists.infradead.org, xen-devel@lists.xenproject.org, kvm@vger.kernel.org, Hugh Dickins , "Vishal Moola (Oracle)" , Mike Rapoport Subject: [PATCH mm-unstable v9 31/31] mm: Remove pgtable_{pmd, pte}_page_{ctor, dtor}() wrappers Date: Mon, 7 Aug 2023 16:05:13 -0700 Message-Id: <20230807230513.102486-32-vishal.moola@gmail.com> X-Mailer: git-send-email 2.40.1 In-Reply-To: <20230807230513.102486-1-vishal.moola@gmail.com> References: <20230807230513.102486-1-vishal.moola@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-sh@vger.kernel.org These functions are no longer necessary. Remove them and cleanup Documentation referencing them. Acked-by: Mike Rapoport (IBM) Signed-off-by: Vishal Moola (Oracle) --- Documentation/mm/split_page_table_lock.rst | 12 +++++------ .../zh_CN/mm/split_page_table_lock.rst | 14 ++++++------- include/linux/mm.h | 20 ------------------- 3 files changed, 13 insertions(+), 33 deletions(-) diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst index a834fad9de12..e4f6972eb6c0 100644 --- a/Documentation/mm/split_page_table_lock.rst +++ b/Documentation/mm/split_page_table_lock.rst @@ -58,7 +58,7 @@ Support of split page table lock by an architecture =================================================== There's no need in special enabling of PTE split page table lock: everything -required is done by pgtable_pte_page_ctor() and pgtable_pte_page_dtor(), which +required is done by pagetable_pte_ctor() and pagetable_pte_dtor(), which must be called on PTE table allocation / freeing. Make sure the architecture doesn't use slab allocator for page table @@ -68,8 +68,8 @@ This field shares storage with page->ptl. PMD split lock only makes sense if you have more than two page table levels. -PMD split lock enabling requires pgtable_pmd_page_ctor() call on PMD table -allocation and pgtable_pmd_page_dtor() on freeing. +PMD split lock enabling requires pagetable_pmd_ctor() call on PMD table +allocation and pagetable_pmd_dtor() on freeing. Allocation usually happens in pmd_alloc_one(), freeing in pmd_free() and pmd_free_tlb(), but make sure you cover all PMD table allocation / freeing @@ -77,7 +77,7 @@ paths: i.e X86_PAE preallocate few PMDs on pgd_alloc(). 
With everything in place you can set CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK. -NOTE: pgtable_pte_page_ctor() and pgtable_pmd_page_ctor() can fail -- it must +NOTE: pagetable_pte_ctor() and pagetable_pmd_ctor() can fail -- it must be handled properly. page->ptl @@ -97,7 +97,7 @@ trick: split lock with enabled DEBUG_SPINLOCK or DEBUG_LOCK_ALLOC, but costs one more cache line for indirect access; -The spinlock_t allocated in pgtable_pte_page_ctor() for PTE table and in -pgtable_pmd_page_ctor() for PMD table. +The spinlock_t allocated in pagetable_pte_ctor() for PTE table and in +pagetable_pmd_ctor() for PMD table. Please, never access page->ptl directly -- use appropriate helper. diff --git a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst index 4fb7aa666037..a2c288670a24 100644 --- a/Documentation/translations/zh_CN/mm/split_page_table_lock.rst +++ b/Documentation/translations/zh_CN/mm/split_page_table_lock.rst @@ -56,16 +56,16 @@ Hugetlb特定的辅助函数: 架构对分页表锁的支持 ==================== -没有必要特别启用PTE分页表锁:所有需要的东西都由pgtable_pte_page_ctor() -和pgtable_pte_page_dtor()完成,它们必须在PTE表分配/释放时被调用。 +没有必要特别启用PTE分页表锁:所有需要的东西都由pagetable_pte_ctor() +和pagetable_pte_dtor()完成,它们必须在PTE表分配/释放时被调用。 确保架构不使用slab分配器来分配页表:slab使用page->slab_cache来分配其页 面。这个区域与page->ptl共享存储。 PMD分页锁只有在你有两个以上的页表级别时才有意义。 -启用PMD分页锁需要在PMD表分配时调用pgtable_pmd_page_ctor(),在释放时调 -用pgtable_pmd_page_dtor()。 +启用PMD分页锁需要在PMD表分配时调用pagetable_pmd_ctor(),在释放时调 +用pagetable_pmd_dtor()。 分配通常发生在pmd_alloc_one()中,释放发生在pmd_free()和pmd_free_tlb() 中,但要确保覆盖所有的PMD表分配/释放路径:即X86_PAE在pgd_alloc()中预先 @@ -73,7 +73,7 @@ PMD分页锁只有在你有两个以上的页表级别时才有意义。 一切就绪后,你可以设置CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK。 -注意:pgtable_pte_page_ctor()和pgtable_pmd_page_ctor()可能失败--必 +注意:pagetable_pte_ctor()和pagetable_pmd_ctor()可能失败--必 须正确处理。 page->ptl @@ -90,7 +90,7 @@ page->ptl用于访问分割页表锁,其中'page'是包含该表的页面struc 的指针并动态分配它。这允许在启用DEBUG_SPINLOCK或DEBUG_LOCK_ALLOC的 情况下使用分页锁,但由于间接访问而多花了一个缓存行。 -PTE表的spinlock_t分配在pgtable_pte_page_ctor()中,PMD表的spinlock_t -分配在pgtable_pmd_page_ctor()中。 +PTE表的spinlock_t分配在pagetable_pte_ctor()中,PMD表的spinlock_t +分配在pagetable_pmd_ctor()中。 请不要直接访问page->ptl - -使用适当的辅助函数。 diff --git a/include/linux/mm.h b/include/linux/mm.h index 6310e0c59efe..6a95dfed4957 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2932,11 +2932,6 @@ static inline bool pagetable_pte_ctor(struct ptdesc *ptdesc) return true; } -static inline bool pgtable_pte_page_ctor(struct page *page) -{ - return pagetable_pte_ctor(page_ptdesc(page)); -} - static inline void pagetable_pte_dtor(struct ptdesc *ptdesc) { struct folio *folio = ptdesc_folio(ptdesc); @@ -2946,11 +2941,6 @@ static inline void pagetable_pte_dtor(struct ptdesc *ptdesc) lruvec_stat_sub_folio(folio, NR_PAGETABLE); } -static inline void pgtable_pte_page_dtor(struct page *page) -{ - pagetable_pte_dtor(page_ptdesc(page)); -} - pte_t *__pte_offset_map(pmd_t *pmd, unsigned long addr, pmd_t *pmdvalp); static inline pte_t *pte_offset_map(pmd_t *pmd, unsigned long addr) { @@ -3057,11 +3047,6 @@ static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc) return true; } -static inline bool pgtable_pmd_page_ctor(struct page *page) -{ - return pagetable_pmd_ctor(page_ptdesc(page)); -} - static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc) { struct folio *folio = ptdesc_folio(ptdesc); @@ -3071,11 +3056,6 @@ static inline void pagetable_pmd_dtor(struct ptdesc *ptdesc) lruvec_stat_sub_folio(folio, NR_PAGETABLE); } -static inline void pgtable_pmd_page_dtor(struct page *page) -{ - 
	pagetable_pmd_dtor(page_ptdesc(page));
-}
-
 /*
  * No scalability reason to split PUD locks yet, but follow the same pattern
  * as the PMD locks to make it easier if we decide to. The VM should not be