From patchwork Tue Jul 23 15:34:56 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13740175
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 1/6] bootmem: Stop using page->index
Date: Tue, 23 Jul 2024 16:34:56 +0100
Message-ID: <20240723153503.1669586-2-willy@infradead.org>
In-Reply-To: <20240723153503.1669586-1-willy@infradead.org>
References: <20240723153503.1669586-1-willy@infradead.org>

Encode the type into the bottom four bits of page->private and the info
into the remaining bits.  Also turn the bootmem type into a named enum.
Signed-off-by: Matthew Wilcox (Oracle)
---
 arch/x86/mm/init_64.c        |  9 ++++-----
 include/linux/bootmem_info.h | 25 +++++++++++++++++--------
 mm/bootmem_info.c            | 11 ++++++-----
 mm/sparse.c                  |  8 ++++----
 4 files changed, 31 insertions(+), 22 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index d8dbeac8b206..d77f22850aa2 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -983,13 +983,12 @@ int arch_add_memory(int nid, u64 start, u64 size,
 static void __meminit free_pagetable(struct page *page, int order)
 {
-	unsigned long magic;
-	unsigned int nr_pages = 1 << order;
-
 	/* bootmem page has reserved flag */
 	if (PageReserved(page)) {
-		magic = page->index;
-		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) {
+		enum bootmem_type type = bootmem_type(page);
+		unsigned long nr_pages = 1 << order;
+
+		if (type == SECTION_INFO || type == MIX_SECTION_INFO) {
 			while (nr_pages--)
 				put_page_bootmem(page++);
 		} else
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index cffa38a73618..e2fe5de93dcc 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -6,11 +6,10 @@
 #include

 /*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
+ * Types for free bootmem stored in the low bits of page->private.
  */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+enum bootmem_type {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 1,
 	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
 	MIX_SECTION_INFO,
 	NODE_INFO,
@@ -21,9 +20,19 @@ enum {
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
-		unsigned long type);
+		enum bootmem_type type);
 void put_page_bootmem(struct page *page);

+static inline enum bootmem_type bootmem_type(const struct page *page)
+{
+	return (unsigned long)page->private & 0xf;
+}
+
+static inline unsigned long bootmem_info(const struct page *page)
+{
+	return (unsigned long)page->private >> 4;
+}
+
 /*
  * Any memory allocated via the memblock allocator and not via the
  * buddy will be marked reserved already in the memmap. For those
@@ -31,7 +40,7 @@ void put_page_bootmem(struct page *page);
  */
 static inline void free_bootmem_page(struct page *page)
 {
-	unsigned long magic = page->index;
+	enum bootmem_type type = bootmem_type(page);

 	/*
	 * The reserve_bootmem_region sets the reserved flag on bootmem
	 */
	VM_BUG_ON_PAGE(page_ref_count(page) != 2, page);

-	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+	if (type == SECTION_INFO || type == MIX_SECTION_INFO)
		put_page_bootmem(page);
	else
		VM_BUG_ON_PAGE(1, page);
@@ -54,7 +63,7 @@ static inline void put_page_bootmem(struct page *page)
 }

 static inline void get_page_bootmem(unsigned long info, struct page *page,
-		unsigned long type)
+		enum bootmem_type type)
 {
 }
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index fa7cb0c87c03..95f288169a38 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -14,23 +14,24 @@
 #include
 #include

-void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+void get_page_bootmem(unsigned long info, struct page *page,
+		enum bootmem_type type)
 {
-	page->index = type;
+	BUG_ON(type > 0xf);
+	BUG_ON(info > (ULONG_MAX >> 4));
 	SetPagePrivate(page);
-	set_page_private(page, info);
+	set_page_private(page, info << 4 | type);
 	page_ref_inc(page);
 }

 void put_page_bootmem(struct page *page)
 {
-	unsigned long type = page->index;
+	enum bootmem_type type = bootmem_type(page);

 	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);

 	if (page_ref_dec_return(page) == 1) {
-		page->index = 0;
 		ClearPagePrivate(page);
 		set_page_private(page, 0);
 		INIT_LIST_HEAD(&page->lru);
diff --git a/mm/sparse.c b/mm/sparse.c
index e4b830091d13..ad89ce5d9d28 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -721,19 +721,19 @@ static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
 static void free_map_bootmem(struct page *memmap)
 {
 	unsigned long maps_section_nr, removing_section_nr, i;
-	unsigned long magic, nr_pages;
+	unsigned long type, nr_pages;
 	struct page *page = virt_to_page(memmap);

 	nr_pages = PAGE_ALIGN(PAGES_PER_SECTION * sizeof(struct page))
		>> PAGE_SHIFT;

 	for (i = 0; i < nr_pages; i++, page++) {
-		magic = page->index;
+		type = bootmem_type(page);

-		BUG_ON(magic == NODE_INFO);
+		BUG_ON(type == NODE_INFO);

 		maps_section_nr = pfn_to_section_nr(page_to_pfn(page));
-		removing_section_nr = page_private(page);
+		removing_section_nr = bootmem_info(page);

 		/*
		 * When this function is called, the removing section is

From patchwork Tue Jul 23 15:34:57 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13740173
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 2/6] mm: Constify page_address_in_vma()
Date: Tue, 23 Jul 2024 16:34:57 +0100
Message-ID: <20240723153503.1669586-3-willy@infradead.org>
In-Reply-To: <20240723153503.1669586-1-willy@infradead.org>
References: <20240723153503.1669586-1-willy@infradead.org>

If we also mark the struct folio argument to folio_anon_vma() as const,
we can make page_address_in_vma() take a const struct page pointer.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/rmap.h | 2 +-
 mm/internal.h        | 2 +-
 mm/rmap.c            | 5 +++--
 mm/util.c            | 2 +-
 4 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 0978c64f49d8..d1fca5b76039 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -732,7 +732,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
 /*
  * Used by swapoff to help locate where page is expected in vma.
  */
-unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
+unsigned long page_address_in_vma(const struct page *, struct vm_area_struct *);

 /*
  * Cleans the PTEs of shared mappings.
diff --git a/mm/internal.h b/mm/internal.h
index b4d86436565b..e511708b2be0 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -810,7 +810,7 @@ static inline bool is_data_mapping(vm_flags_t flags)
 }

 /* mm/util.c */
-struct anon_vma *folio_anon_vma(struct folio *folio);
+struct anon_vma *folio_anon_vma(const struct folio *folio);

 #ifdef CONFIG_MMU
 void unmap_mapping_folio(struct folio *folio);
diff --git a/mm/rmap.c b/mm/rmap.c
index 8616308610b9..886bf67ba382 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -771,9 +771,10 @@ static bool should_defer_flush(struct mm_struct *mm, enum ttu_flags flags)
 /*
  * At what user virtual address is page expected in vma?
  * Caller should check the page is actually part of the vma.
  */
-unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
+unsigned long page_address_in_vma(const struct page *page,
+		struct vm_area_struct *vma)
 {
-	struct folio *folio = page_folio(page);
+	const struct folio *folio = page_folio(page);
 	pgoff_t pgoff;

 	if (folio_test_anon(folio)) {
diff --git a/mm/util.c b/mm/util.c
index bc488f0121a7..8afe3b90d650 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -780,7 +780,7 @@ void *vcalloc_noprof(size_t n, size_t size)
 }
 EXPORT_SYMBOL(vcalloc_noprof);

-struct anon_vma *folio_anon_vma(struct folio *folio)
+struct anon_vma *folio_anon_vma(const struct folio *folio)
 {
 	unsigned long mapping = (unsigned long)folio->mapping;

From patchwork Tue Jul 23 15:34:58 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13740176
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 3/6] mm: Convert page_to_pgoff() to page_pgoff()
Date: Tue, 23 Jul 2024 16:34:58 +0100
Message-ID: <20240723153503.1669586-4-willy@infradead.org>
In-Reply-To: <20240723153503.1669586-1-willy@infradead.org>
References: <20240723153503.1669586-1-willy@infradead.org>
Change the function signature to pass in the folio as all three callers
have it.  This removes a reference to page->index, which we're trying
to get rid of.  Also move page_pgoff() to mm/internal.h as code outside
mm has no business calling it.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/pagemap.h | 18 ------------------
 mm/internal.h           |  6 ++++++
 mm/memory-failure.c     |  4 ++--
 mm/rmap.c               |  2 +-
 4 files changed, 9 insertions(+), 21 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 483a191bb4df..1f295ef7d10d 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -913,24 +913,6 @@ static inline struct folio *read_mapping_folio(struct address_space *mapping,
 	return read_cache_folio(mapping, index, NULL, file);
 }

-/*
- * Get the offset in PAGE_SIZE (even for hugetlb pages).
- */
-static inline pgoff_t page_to_pgoff(struct page *page)
-{
-	struct page *head;
-
-	if (likely(!PageTransTail(page)))
-		return page->index;
-
-	head = compound_head(page);
-	/*
-	 * We don't initialize ->index for tail pages: calculate based on
-	 * head page
-	 */
-	return head->index + page - head;
-}
-
 /*
  * Return byte-offset into filesystem object for page.
  */
diff --git a/mm/internal.h b/mm/internal.h
index e511708b2be0..8dfd9527ac1e 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -919,6 +919,12 @@ void mlock_drain_remote(int cpu);

 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);

+static inline pgoff_t page_pgoff(const struct folio *folio,
+		const struct page *page)
+{
+	return folio->index + folio_page_idx(folio, page);
+}
+
 /**
  * vma_address - Find the virtual address a page range is mapped at
  * @vma: The vma which maps this object.
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 581d3e5c9117..572c742ecf48 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -617,7 +617,7 @@ static void collect_procs_anon(struct folio *folio, struct page *page,
 	if (av == NULL)	/* Not actually mapped anymore */
 		return;

-	pgoff = page_to_pgoff(page);
+	pgoff = page_pgoff(folio, page);
 	rcu_read_lock();
 	for_each_process(tsk) {
 		struct vm_area_struct *vma;
@@ -653,7 +653,7 @@ static void collect_procs_file(struct folio *folio, struct page *page,
 	i_mmap_lock_read(mapping);
 	rcu_read_lock();
-	pgoff = page_to_pgoff(page);
+	pgoff = page_pgoff(folio, page);
 	for_each_process(tsk) {
 		struct task_struct *t = task_early_kill(tsk, force_early);
 		unsigned long addr;
diff --git a/mm/rmap.c b/mm/rmap.c
index 886bf67ba382..ba1920291ac6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1266,7 +1266,7 @@ static void __page_check_anon_rmap(struct folio *folio, struct page *page,
 	 */
 	VM_BUG_ON_FOLIO(folio_anon_vma(folio)->root != vma->anon_vma->root,
			folio);
-	VM_BUG_ON_PAGE(page_to_pgoff(page) != linear_page_index(vma, address),
+	VM_BUG_ON_PAGE(page_pgoff(folio, page) != linear_page_index(vma, address),
		       page);
 }

From patchwork Tue Jul 23 15:34:59 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13740178
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 4/6] mm: Mass constification of folio/page pointers
Date: Tue, 23 Jul 2024 16:34:59 +0100
Message-ID: <20240723153503.1669586-5-willy@infradead.org>
In-Reply-To: <20240723153503.1669586-1-willy@infradead.org>
References: <20240723153503.1669586-1-willy@infradead.org>

Now that page_pgoff() takes const pointers, we can constify the
pointers to a lot of functions.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/ksm.h  |  7 ++++---
 include/linux/rmap.h | 10 +++++-----
 mm/internal.h        |  5 +++--
 mm/ksm.c             |  5 +++--
 mm/memory-failure.c  | 24 +++++++++++++-----------
 mm/page_vma_mapped.c |  5 +++--
 mm/rmap.c            | 11 ++++++-----
 7 files changed, 37 insertions(+), 30 deletions(-)

diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index 11690dacd986..c4a8891f6e7d 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -92,7 +92,7 @@ struct folio *ksm_might_need_to_copy(struct folio *folio,
 void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc);
 void folio_migrate_ksm(struct folio *newfolio, struct folio *folio);
-void collect_procs_ksm(struct folio *folio, struct page *page,
+void collect_procs_ksm(const struct folio *folio, const struct page *page,
 		struct list_head *to_kill, int force_early);
 long ksm_process_profit(struct mm_struct *);
@@ -125,8 +125,9 @@ static inline void ksm_might_unmap_zero_page(struct mm_struct *mm, pte_t pte)
 {
 }
-static inline void collect_procs_ksm(struct folio *folio, struct page *page,
-			struct list_head *to_kill, int force_early)
+static inline void collect_procs_ksm(const struct folio *folio,
+		const struct page *page, struct list_head *to_kill,
+		int force_early)
 {
 }
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index d1fca5b76039..bef597736e60 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -171,7 +171,7 @@ static inline void anon_vma_merge(struct vm_area_struct *vma,
 	unlink_anon_vmas(next);
 }
-struct anon_vma *folio_get_anon_vma(struct folio *folio);
+struct anon_vma *folio_get_anon_vma(const struct folio *folio);
 /* RMAP flags, currently only relevant for some anon rmap operations. */
 typedef int __bitwise rmap_t;
@@ -194,8 +194,8 @@ enum rmap_level {
 	RMAP_LEVEL_PMD,
 };
-static inline void __folio_rmap_sanity_checks(struct folio *folio,
-		struct page *page, int nr_pages, enum rmap_level level)
+static inline void __folio_rmap_sanity_checks(const struct folio *folio,
+		const struct page *page, int nr_pages, enum rmap_level level)
 {
 	/* hugetlb folios are handled separately. */
 	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
@@ -769,14 +769,14 @@ struct rmap_walk_control {
 	bool (*rmap_one)(struct folio *folio, struct vm_area_struct *vma,
 					unsigned long addr, void *arg);
 	int (*done)(struct folio *folio);
-	struct anon_vma *(*anon_lock)(struct folio *folio,
+	struct anon_vma *(*anon_lock)(const struct folio *folio,
 				      struct rmap_walk_control *rwc);
 	bool (*invalid_vma)(struct vm_area_struct *vma, void *arg);
 };
 void rmap_walk(struct folio *folio, struct rmap_walk_control *rwc);
 void rmap_walk_locked(struct folio *folio, struct rmap_walk_control *rwc);
-struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
+struct anon_vma *folio_lock_anon_vma_read(const struct folio *folio,
 					  struct rmap_walk_control *rwc);
 #else /* !CONFIG_MMU */
diff --git a/mm/internal.h b/mm/internal.h
index 8dfd9527ac1e..ec01e63572ae 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1090,10 +1090,11 @@ void ClearPageHWPoisonTakenOff(struct page *page);
 bool take_page_off_buddy(struct page *page);
 bool put_page_back_buddy(struct page *page);
 struct task_struct *task_early_kill(struct task_struct *tsk, int force_early);
-void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
+void add_to_kill_ksm(struct task_struct *tsk, const struct page *p,
 		     struct vm_area_struct *vma, struct list_head *to_kill,
 		     unsigned long ksm_addr);
-unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
+unsigned long page_mapped_in_vma(const struct page *page,
+		struct vm_area_struct *vma);
 extern unsigned long  __must_check vm_mmap_pgoff(struct file *, unsigned long,
         unsigned long, unsigned long,
diff --git a/mm/ksm.c b/mm/ksm.c
index df6bae3a5a2c..8d45cfe7671f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1080,7 +1080,8 @@ static int unmerge_ksm_pages(struct vm_area_struct *vma,
 	return err;
 }
-static inline struct ksm_stable_node *folio_stable_node(struct folio *folio)
+static inline
+struct ksm_stable_node *folio_stable_node(const struct folio *folio)
 {
 	return folio_test_ksm(folio) ? folio_raw_mapping(folio) : NULL;
 }
@@ -3085,7 +3086,7 @@ void rmap_walk_ksm(struct folio *folio, struct rmap_walk_control *rwc)
 /*
  * Collect processes when the error hit an ksm page.
  */
-void collect_procs_ksm(struct folio *folio, struct page *page,
+void collect_procs_ksm(const struct folio *folio, const struct page *page,
 		struct list_head *to_kill, int force_early)
 {
 	struct ksm_stable_node *stable_node;
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 572c742ecf48..729e9c49cc57 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -445,7 +445,7 @@ static unsigned long dev_pagemap_mapping_shift(struct vm_area_struct *vma,
  * Schedule a process for later kill.
  * Uses GFP_ATOMIC allocations to avoid potential recursions in the VM.
  */
-static void __add_to_kill(struct task_struct *tsk, struct page *p,
+static void __add_to_kill(struct task_struct *tsk, const struct page *p,
 			  struct vm_area_struct *vma, struct list_head *to_kill,
 			  unsigned long addr)
 {
@@ -461,7 +461,7 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p,
 	if (is_zone_device_page(p))
 		tk->size_shift = dev_pagemap_mapping_shift(vma, tk->addr);
 	else
-		tk->size_shift = page_shift(compound_head(p));
+		tk->size_shift = folio_shift(page_folio(p));
 	/*
 	 * Send SIGKILL if "tk->addr == -EFAULT". Also, as
@@ -486,7 +486,7 @@ static void __add_to_kill(struct task_struct *tsk, struct page *p,
 	list_add_tail(&tk->nd, to_kill);
 }
-static void add_to_kill_anon_file(struct task_struct *tsk, struct page *p,
+static void add_to_kill_anon_file(struct task_struct *tsk, const struct page *p,
 		struct vm_area_struct *vma, struct list_head *to_kill,
 		unsigned long addr)
 {
@@ -509,7 +509,7 @@ static bool task_in_to_kill_list(struct list_head *to_kill,
 	return false;
 }
-void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
+void add_to_kill_ksm(struct task_struct *tsk, const struct page *p,
 		     struct vm_area_struct *vma, struct list_head *to_kill,
 		     unsigned long addr)
 {
@@ -606,8 +606,9 @@ struct task_struct *task_early_kill(struct task_struct *tsk, int force_early)
 /*
  * Collect processes when the error hit an anonymous page.
  */
-static void collect_procs_anon(struct folio *folio, struct page *page,
-		struct list_head *to_kill, int force_early)
+static void collect_procs_anon(const struct folio *folio,
+		const struct page *page, struct list_head *to_kill,
+		int force_early)
 {
 	struct task_struct *tsk;
 	struct anon_vma *av;
@@ -643,8 +644,9 @@ static void collect_procs_anon(struct folio *folio, struct page *page,
 /*
  * Collect processes when the error hit a file mapped page.
  */
-static void collect_procs_file(struct folio *folio, struct page *page,
-		struct list_head *to_kill, int force_early)
+static void collect_procs_file(const struct folio *folio,
+		const struct page *page, struct list_head *to_kill,
+		int force_early)
 {
 	struct vm_area_struct *vma;
 	struct task_struct *tsk;
@@ -680,7 +682,7 @@ static void collect_procs_file(struct folio *folio, struct page *page,
 }
 #ifdef CONFIG_FS_DAX
-static void add_to_kill_fsdax(struct task_struct *tsk, struct page *p,
+static void add_to_kill_fsdax(struct task_struct *tsk, const struct page *p,
 			      struct vm_area_struct *vma,
 			      struct list_head *to_kill, pgoff_t pgoff)
 {
@@ -691,7 +693,7 @@ static void add_to_kill_fsdax(struct task_struct *tsk, struct page *p,
 /*
  * Collect processes when the error hit a fsdax page.
  */
-static void collect_procs_fsdax(struct page *page,
+static void collect_procs_fsdax(const struct page *page,
 		struct address_space *mapping, pgoff_t pgoff,
 		struct list_head *to_kill, bool pre_remove)
 {
@@ -725,7 +727,7 @@ static void collect_procs_fsdax(struct page *page,
 /*
  * Collect the processes who have the corrupted page mapped to kill.
  */
-static void collect_procs(struct folio *folio, struct page *page,
+static void collect_procs(const struct folio *folio, const struct page *page,
 		struct list_head *tokill, int force_early)
 {
 	if (!folio->mapping)
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index ae5cc42aa208..9b6632aab5f7 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -325,9 +325,10 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 * outside the VMA or not present, returns -EFAULT.
 * Only valid for normal file or anonymous VMAs.
 */
-unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
+unsigned long page_mapped_in_vma(const struct page *page,
+		struct vm_area_struct *vma)
 {
-	struct folio *folio = page_folio(page);
+	const struct folio *folio = page_folio(page);
 	pgoff_t pgoff = folio->index + folio_page_idx(folio, page);
 	struct page_vma_mapped_walk pvmw = {
 		.pfn = page_to_pfn(page),
diff --git a/mm/rmap.c b/mm/rmap.c
index ba1920291ac6..9bcddd8ec228 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -496,7 +496,7 @@ void __init anon_vma_init(void)
 * concurrently without folio lock protection). See folio_lock_anon_vma_read()
 * which has already covered that, and comment above remap_pages().
 */
-struct anon_vma *folio_get_anon_vma(struct folio *folio)
+struct anon_vma *folio_get_anon_vma(const struct folio *folio)
 {
 	struct anon_vma *anon_vma = NULL;
 	unsigned long anon_mapping;
@@ -540,7 +540,7 @@ struct anon_vma *folio_get_anon_vma(struct folio *folio)
 * reference like with folio_get_anon_vma() and then block on the mutex
 * on !rwc->try_lock case.
 */
-struct anon_vma *folio_lock_anon_vma_read(struct folio *folio,
+struct anon_vma *folio_lock_anon_vma_read(const struct folio *folio,
 					  struct rmap_walk_control *rwc)
 {
 	struct anon_vma *anon_vma = NULL;
@@ -1250,8 +1250,9 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
 * @vma:	the vm area in which the mapping is added
 * @address:	the user virtual address mapped
 */
-static void __page_check_anon_rmap(struct folio *folio, struct page *page,
-		struct vm_area_struct *vma, unsigned long address)
+static void __page_check_anon_rmap(const struct folio *folio,
+		const struct page *page, struct vm_area_struct *vma,
+		unsigned long address)
 {
 	/*
 	 * The page's anon-rmap details (mapping and index) are guaranteed to
@@ -2535,7 +2536,7 @@ void __put_anon_vma(struct anon_vma *anon_vma)
 	anon_vma_free(root);
 }
-static struct anon_vma *rmap_walk_anon_lock(struct folio *folio,
+static struct anon_vma *rmap_walk_anon_lock(const struct folio *folio,
 		struct rmap_walk_control *rwc)
 {
 	struct anon_vma *anon_vma;
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 5/6] mm: Remove references to page->index in huge_memory.c
Date: Tue, 23 Jul 2024 16:35:00 +0100
Message-ID: <20240723153503.1669586-6-willy@infradead.org>
In-Reply-To: <20240723153503.1669586-1-willy@infradead.org>
References: <20240723153503.1669586-1-willy@infradead.org>

We already have folios in all these places; it's just a matter of using
them instead of the pages.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/huge_memory.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f9696c94e211..4ffcae1c82e1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2860,8 +2860,8 @@ static void __split_huge_page_tail(struct folio *folio, int tail,
 	/* ->mapping in first and second tail page is replaced by other uses */
 	VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING,
 			page_tail);
-	page_tail->mapping = head->mapping;
-	page_tail->index = head->index + tail;
+	new_folio->mapping = folio->mapping;
+	new_folio->index = folio->index + tail;
 	/*
 	 * page->private should not be set in tail pages. Fix up and warn once
@@ -2937,11 +2937,11 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	ClearPageHasHWPoisoned(head);
 	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
+		struct folio *tail;
 		__split_huge_page_tail(folio, i, lruvec, list, new_order);
+		tail = page_folio(head + i);
 		/* Some pages can be beyond EOF: drop them from page cache */
-		if (head[i].index >= end) {
-			struct folio *tail = page_folio(head + i);
-
+		if (tail->index >= end) {
 			if (shmem_mapping(folio->mapping))
 				nr_dropped++;
 			else if (folio_test_clear_dirty(tail))
 				folio_account_cleaned(tail,
					inode_to_wb(folio->mapping->host));
 			__filemap_remove_folio(tail, NULL);
 			folio_put(tail);
-		} else if (!PageAnon(page)) {
-			__xa_store(&folio->mapping->i_pages, head[i].index,
-					head + i, 0);
+		} else if (!folio_test_anon(folio)) {
+			__xa_store(&folio->mapping->i_pages, tail->index,
+					tail, 0);
 		} else if (swap_cache) {
 			__xa_store(&swap_cache->i_pages, offset + i,
-					head + i, 0);
+					tail, 0);
 		}
 	}
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 6/6] mm: Use page->private instead of page->index in percpu
Date: Tue, 23 Jul 2024 16:35:01 +0100
Message-ID: <20240723153503.1669586-7-willy@infradead.org>
In-Reply-To: <20240723153503.1669586-1-willy@infradead.org>
References: <20240723153503.1669586-1-willy@infradead.org>

The percpu allocator only uses one field in struct page, so just change
it from page->index to page->private.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/percpu.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 20d91af8c033..763fe7641602 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -253,13 +253,13 @@ static int pcpu_chunk_slot(const struct pcpu_chunk *chunk)
 /* set the pointer to a chunk in a page struct */
 static void pcpu_set_page_chunk(struct page *page, struct pcpu_chunk *pcpu)
 {
-	page->index = (unsigned long)pcpu;
+	page->private = (unsigned long)pcpu;
 }
 
 /* obtain pointer to a chunk from a page struct */
 static struct pcpu_chunk *pcpu_get_page_chunk(struct page *page)
 {
-	return (struct pcpu_chunk *)page->index;
+	return (struct pcpu_chunk *)page->private;
 }
 
 static int __maybe_unused pcpu_page_idx(unsigned int cpu, int page_idx)