From patchwork Tue Mar 5 08:59:10 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Max Kellermann <max.kellermann@ionos.com>
X-Patchwork-Id: 13581858
From: Max Kellermann <max.kellermann@ionos.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: willy@infradead.org, sfr@canb.auug.org.au, Max Kellermann <max.kellermann@ionos.com>
Subject: [PATCH v3 05/14] linux/mm.h: move page_address() and others to mm/page_address.h
Date: Tue, 5 Mar 2024 09:59:10 +0100
Message-Id: <20240305085919.1601395-6-max.kellermann@ionos.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240305085919.1601395-1-max.kellermann@ionos.com>
References: <20240305085919.1601395-1-max.kellermann@ionos.com>

Prepare to reduce dependencies on linux/mm.h.

page_address() is used by the following popular headers:

- linux/bio.h
- linux/bvec.h
- linux/highmem.h
- linux/scatterlist.h
- linux/skbuff.h

Moving it to a separate lean header will allow us to avoid the
dependency on linux/mm.h.

Signed-off-by: Max Kellermann <max.kellermann@ionos.com>
---
Note: an illustrative usage sketch (not part of the patch) follows the
diff.

 include/linux/mm.h              | 56 +-------------------------
 include/linux/mm/page_address.h | 71 +++++++++++++++++++++++++++++++++
 2 files changed, 72 insertions(+), 55 deletions(-)
 create mode 100644 include/linux/mm/page_address.h

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 79c1f924d4b5..713cedc03b88 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2,7 +2,7 @@
 #ifndef _LINUX_MM_H
 #define _LINUX_MM_H
 
-#include
+#include
 #include
 #include
 #include
@@ -104,10 +104,6 @@ extern int mmap_rnd_compat_bits __read_mostly;
 #define __pa_symbol(x) __pa(RELOC_HIDE((unsigned long)(x), 0))
 #endif
 
-#ifndef page_to_virt
-#define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
-#endif
-
 #ifndef lm_alias
 #define lm_alias(x) __va(__pa_symbol(x))
 #endif
@@ -211,14 +207,6 @@ int overcommit_kbytes_handler(struct ctl_table *, int, void *, size_t *,
 int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 		loff_t *);
 
-#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
-#define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
-#define folio_page_idx(folio, p) (page_to_pfn(p) - folio_pfn(folio))
-#else
-#define nth_page(page,n) ((page) + (n))
-#define folio_page_idx(folio, p) ((p) - &(folio)->page)
-#endif
-
 /* to align the pointer to the (next) page boundary */
 #define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)
 
@@ -2137,44 +2125,6 @@ static inline int arch_make_folio_accessible(struct folio *folio)
  */
 #include
 
-static __always_inline void *lowmem_page_address(const struct page *page)
-{
-	return page_to_virt(page);
-}
-
-#if defined(CONFIG_HIGHMEM) && !defined(WANT_PAGE_VIRTUAL)
-#define HASHED_PAGE_VIRTUAL
-#endif
-
-#if defined(WANT_PAGE_VIRTUAL)
-static inline void *page_address(const struct page *page)
-{
-	return page->virtual;
-}
-static inline void set_page_address(struct page *page, void *address)
-{
-	page->virtual = address;
-}
-#define page_address_init() do { } while(0)
-#endif
-
-#if defined(HASHED_PAGE_VIRTUAL)
-void *page_address(const struct page *page);
-void set_page_address(struct page *page, void *virtual);
-void page_address_init(void);
-#endif
-
-#if !defined(HASHED_PAGE_VIRTUAL) && !defined(WANT_PAGE_VIRTUAL)
-#define page_address(page) lowmem_page_address(page)
-#define set_page_address(page, address) do { } while(0)
-#define page_address_init() do { } while(0)
-#endif
-
-static inline void *folio_address(const struct folio *folio)
-{
-	return page_address(&folio->page);
-}
-
 extern pgoff_t __page_file_index(struct page *page);
 
 /*
@@ -2237,10 +2187,6 @@ static inline void clear_page_pfmemalloc(struct page *page)
  */
 extern void pagefault_out_of_memory(void);
 
-#define offset_in_page(p) ((unsigned long)(p) & ~PAGE_MASK)
-#define offset_in_thp(page, p) ((unsigned long)(p) & (thp_size(page) - 1))
-#define offset_in_folio(folio, p) ((unsigned long)(p) & (folio_size(folio) - 1))
-
 /*
  * Parameter block passed down to zap_pte_range in exceptional cases.
  */
diff --git a/include/linux/mm/page_address.h b/include/linux/mm/page_address.h
new file mode 100644
index 000000000000..e1aaacc5003f
--- /dev/null
+++ b/include/linux/mm/page_address.h
@@ -0,0 +1,71 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_MM_PAGE_ADDRESS_H
+#define _LINUX_MM_PAGE_ADDRESS_H
+
+#include // for struct page
+#include // needed by the page_to_virt() macro on some architectures (e.g. arm64)
+#include // for PAGE_MASK, page_to_virt()
+
+#if defined(CONFIG_FLATMEM)
+#include // for memmap (used by __pfn_to_page())
+#elif defined(CONFIG_SPARSEMEM_VMEMMAP)
+#include // for vmemmap (used by __pfn_to_page())
+#elif defined(CONFIG_SPARSEMEM)
+#include // for page_to_section() (used by __page_to_pfn())
+#endif
+
+#ifndef page_to_virt
+#define page_to_virt(x) __va(PFN_PHYS(page_to_pfn(x)))
+#endif
+
+#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+#define nth_page(page,n) pfn_to_page(page_to_pfn((page)) + (n))
+#define folio_page_idx(folio, p) (page_to_pfn(p) - folio_pfn(folio))
+#else
+#define nth_page(page,n) ((page) + (n))
+#define folio_page_idx(folio, p) ((p) - &(folio)->page)
+#endif
+
+static __always_inline void *lowmem_page_address(const struct page *page)
+{
+	return page_to_virt(page);
+}
+
+#if defined(CONFIG_HIGHMEM) && !defined(WANT_PAGE_VIRTUAL)
+#define HASHED_PAGE_VIRTUAL
+#endif
+
+#if defined(WANT_PAGE_VIRTUAL)
+static inline void *page_address(const struct page *page)
+{
+	return page->virtual;
+}
+static inline void set_page_address(struct page *page, void *address)
+{
+	page->virtual = address;
+}
+#define page_address_init() do { } while(0)
+#endif
+
+#if defined(HASHED_PAGE_VIRTUAL)
+void *page_address(const struct page *page);
+void set_page_address(struct page *page, void *virtual);
+void page_address_init(void);
+#endif
+
+#if !defined(HASHED_PAGE_VIRTUAL) && !defined(WANT_PAGE_VIRTUAL)
+#define page_address(page) lowmem_page_address(page)
+#define set_page_address(page, address) do { } while(0)
+#define page_address_init() do { } while(0)
+#endif
+
+static inline void *folio_address(const struct folio *folio)
+{
+	return page_address(&folio->page);
+}
+
+#define offset_in_page(p) ((unsigned long)(p) & ~PAGE_MASK)
+#define offset_in_thp(page, p) ((unsigned long)(p) & (thp_size(page) - 1))
+#define offset_in_folio(folio, p) ((unsigned long)(p) & (folio_size(folio) - 1))
+
+#endif /* _LINUX_MM_PAGE_ADDRESS_H */
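
Illustrative usage sketch (hypothetical, not part of the patch): once the
proposed <linux/mm/page_address.h> exists, code that only needs
page_address() and offset_in_page() can include the lean header instead of
all of linux/mm.h. The helper name example_zero_page_tail() is invented for
this sketch, and it assumes a lowmem page, i.e. one whose mapping does not
require kmap().

/*
 * Hypothetical example: relies only on the helpers provided by the
 * proposed <linux/mm/page_address.h>, so it does not need <linux/mm.h>.
 */
#include <linux/mm/page_address.h>	/* page_address(), offset_in_page() */
#include <linux/string.h>		/* memset() */
#include <asm/page.h>			/* PAGE_SIZE */

/*
 * Zero everything from @p to the end of the lowmem page it points into.
 * (Lowmem only: page_address() on a highmem page would need kmap().)
 */
static inline void example_zero_page_tail(struct page *page, void *p)
{
	unsigned long offset = offset_in_page(p);

	memset(page_address(page) + offset, 0, PAGE_SIZE - offset);
}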