From patchwork Wed Dec 13 21:58:39 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13491926
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 10/13] mm: Convert swap_readpage() to swap_read_folio()
Date: Wed, 13 Dec 2023 21:58:39 +0000
Message-Id: <20231213215842.671461-11-willy@infradead.org>
In-Reply-To: <20231213215842.671461-1-willy@infradead.org>
References: <20231213215842.671461-1-willy@infradead.org>
MIME-Version: 1.0

All callers have a folio, so pass it in, saving two calls to
compound_head().
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/memory.c     |  4 ++--
 mm/page_io.c    | 18 +++++++++---------
 mm/swap.h       |  5 +++--
 mm/swap_state.c | 12 ++++++------
 mm/swapfile.c   |  2 +-
 5 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e402340e3f46..2f7b212b7d71 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3882,9 +3882,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			folio_add_lru(folio);

-			/* To provide entry to swap_readpage() */
+			/* To provide entry to swap_read_folio() */
 			folio->swap = entry;
-			swap_readpage(page, true, NULL);
+			swap_read_folio(folio, true, NULL);
 			folio->private = NULL;
 		}
 	} else {
diff --git a/mm/page_io.c b/mm/page_io.c
index 6736c56526bf..09c6a4f316f3 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -420,7 +420,7 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
 	mempool_free(sio, sio_pool);
 }

-static void swap_readpage_fs(struct folio *folio, struct swap_iocb **plug)
+static void swap_read_folio_fs(struct folio *folio, struct swap_iocb **plug)
 {
 	struct swap_info_struct *sis = swp_swap_info(folio->swap);
 	struct swap_iocb *sio = NULL;
@@ -454,7 +454,7 @@ static void swap_readpage_fs(struct folio *folio, struct swap_iocb **plug)
 		*plug = sio;
 }

-static void swap_readpage_bdev_sync(struct folio *folio,
+static void swap_read_folio_bdev_sync(struct folio *folio,
 		struct swap_info_struct *sis)
 {
 	struct bio_vec bv;
@@ -474,7 +474,7 @@ static void swap_readpage_bdev_sync(struct folio *folio,
 	put_task_struct(current);
 }

-static void swap_readpage_bdev_async(struct folio *folio,
+static void swap_read_folio_bdev_async(struct folio *folio,
 		struct swap_info_struct *sis)
 {
 	struct bio *bio;
@@ -487,10 +487,10 @@ static void swap_readpage_bdev_async(struct folio *folio,
 	submit_bio(bio);
 }

-void swap_readpage(struct page *page, bool synchronous, struct swap_iocb **plug)
+void swap_read_folio(struct folio *folio, bool synchronous,
+		struct swap_iocb **plug)
 {
-	struct folio *folio = page_folio(page);
-	struct swap_info_struct *sis = page_swap_info(page);
+	struct swap_info_struct *sis = swp_swap_info(folio->swap);
 	bool workingset = folio_test_workingset(folio);
 	unsigned long pflags;
 	bool in_thrashing;
@@ -514,11 +514,11 @@ void swap_readpage(struct page *page, bool synchronous, struct swap_iocb **plug)
 		folio_mark_uptodate(folio);
 		folio_unlock(folio);
 	} else if (data_race(sis->flags & SWP_FS_OPS)) {
-		swap_readpage_fs(folio, plug);
+		swap_read_folio_fs(folio, plug);
 	} else if (synchronous || (sis->flags & SWP_SYNCHRONOUS_IO)) {
-		swap_readpage_bdev_sync(folio, sis);
+		swap_read_folio_bdev_sync(folio, sis);
 	} else {
-		swap_readpage_bdev_async(folio, sis);
+		swap_read_folio_bdev_async(folio, sis);
 	}

 	if (workingset) {
diff --git a/mm/swap.h b/mm/swap.h
index b81587740cf1..859ae8f0fd2d 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -10,7 +10,8 @@ struct mempolicy;
 /* linux/mm/page_io.c */
 int sio_pool_init(void);
 struct swap_iocb;
-void swap_readpage(struct page *page, bool do_poll, struct swap_iocb **plug);
+void swap_read_folio(struct folio *folio, bool do_poll,
+		struct swap_iocb **plug);
 void __swap_read_unplug(struct swap_iocb *plug);
 static inline void swap_read_unplug(struct swap_iocb *plug)
 {
@@ -63,7 +64,7 @@ static inline unsigned int folio_swap_flags(struct folio *folio)
 }
 #else /* CONFIG_SWAP */
 struct swap_iocb;
-static inline void swap_readpage(struct page *page, bool do_poll,
+static inline void swap_read_folio(struct folio *folio, bool do_poll,
 		struct swap_iocb **plug)
 {
 }
diff --git a/mm/swap_state.c b/mm/swap_state.c
index d4e25d9b5dc6..efff7148a59d 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -539,7 +539,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 * the swap entry is no longer in use.
 *
 * get/put_swap_device() aren't needed to call this function, because
- * __read_swap_cache_async() call them and swap_readpage() holds the
+ * __read_swap_cache_async() call them and swap_read_folio() holds the
 * swap cache folio lock.
 */
struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
@@ -557,7 +557,7 @@ struct page *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	mpol_cond_put(mpol);

 	if (page_allocated)
-		swap_readpage(&folio->page, false, plug);
+		swap_read_folio(folio, false, plug);
 	return folio_file_page(folio, swp_offset(entry));
 }

@@ -674,7 +674,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		if (!folio)
 			continue;
 		if (page_allocated) {
-			swap_readpage(&folio->page, false, &splug);
+			swap_read_folio(folio, false, &splug);
 			if (offset != entry_offset) {
 				folio_set_readahead(folio);
 				count_vm_event(SWAP_RA);
@@ -690,7 +690,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	folio = __read_swap_cache_async(entry, gfp_mask, mpol, ilx,
 					&page_allocated, false);
 	if (unlikely(page_allocated))
-		swap_readpage(&folio->page, false, NULL);
+		swap_read_folio(folio, false, NULL);
 	zswap_folio_swapin(folio);
 	return folio_file_page(folio, swp_offset(entry));
 }
@@ -848,7 +848,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 		if (!folio)
 			continue;
 		if (page_allocated) {
-			swap_readpage(&folio->page, false, &splug);
+			swap_read_folio(folio, false, &splug);
 			if (i != ra_info.offset) {
 				folio_set_readahead(folio);
 				count_vm_event(SWAP_RA);
@@ -866,7 +866,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 	folio = __read_swap_cache_async(targ_entry, gfp_mask, mpol, targ_ilx,
 					&page_allocated, false);
 	if (unlikely(page_allocated))
-		swap_readpage(&folio->page, false, NULL);
+		swap_read_folio(folio, false, NULL);
 	zswap_folio_swapin(folio);
 	return folio_file_page(folio, swp_offset(entry));
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index b22c47b11d65..f3e23a3d26ae 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -2225,7 +2225,7 @@ EXPORT_SYMBOL_GPL(add_swap_extent);
 /*
 * A `swap extent' is a simple thing which maps a contiguous range of pages
 * onto a contiguous range of disk blocks.  A rbtree of swap extents is
- * built at swapon time and is then used at swap_writepage/swap_readpage
+ * built at swapon time and is then used at swap_writepage/swap_read_folio
 * time for locating where on disk a page belongs.
 *
 * If the swapfile is an S_ISBLK block device, a single extent is installed.