From patchwork Wed Dec 13 21:58:42 2023
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13491923
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH 13/13] mm: Convert swap_cluster_readahead and swap_vma_readahead to return a folio
Date: Wed, 13 Dec 2023 21:58:42 +0000
Message-Id:
<20231213215842.671461-14-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20231213215842.671461-1-willy@infradead.org>
References: <20231213215842.671461-1-willy@infradead.org>
MIME-Version: 1.0

shmem_swapin_cluster() immediately converts the page back to a folio,
and swapin_readahead() may as well call folio_file_page() once instead
of having each function call it.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/shmem.c      |  8 +++-----
 mm/swap.h       |  6 +++---
 mm/swap_state.c | 21 ++++++++++-----------
 3 files changed, 16 insertions(+), 19 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index c62f904ba1ca..a4d388973021 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1570,15 +1570,13 @@ static struct folio *shmem_swapin_cluster(swp_entry_t swap, gfp_t gfp,
 {
 	struct mempolicy *mpol;
 	pgoff_t ilx;
-	struct page *page;
+	struct folio *folio;
 
 	mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
-	page = swap_cluster_readahead(swap, gfp, mpol, ilx);
+	folio = swap_cluster_readahead(swap, gfp, mpol, ilx);
 	mpol_cond_put(mpol);
 
-	if (!page)
-		return NULL;
-	return page_folio(page);
+	return folio;
 }
 
 /*
diff --git a/mm/swap.h b/mm/swap.h
index 82c68ccb5ab1..758c46ca671e 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -52,8 +52,8 @@ struct folio *read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_flags,
 		struct mempolicy *mpol, pgoff_t ilx, bool *new_page_allocated,
 		bool skip_if_exists);
-struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
-		struct mempolicy *mpol, pgoff_t ilx);
+struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t flag,
+		struct mempolicy *mpol, pgoff_t ilx);
 struct page *swapin_readahead(swp_entry_t entry, gfp_t flag,
 		struct vm_fault *vmf);
 
@@ -80,7 +80,7 @@ static inline void show_swap_cache_info(void)
 {
 }
 
-static inline struct page *swap_cluster_readahead(swp_entry_t entry,
+static inline struct folio *swap_cluster_readahead(swp_entry_t entry,
 		gfp_t gfp_mask, struct mempolicy *mpol, pgoff_t ilx)
 {
 	return NULL;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 1cb1d5d0583e..793b5b9e4f96 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -629,7 +629,7 @@ static unsigned long swapin_nr_pages(unsigned long offset)
  * @mpol: NUMA memory allocation policy to be applied
  * @ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
  *
- * Returns the struct page for entry and addr, after queueing swapin.
+ * Returns the struct folio for entry and addr, after queueing swapin.
  *
  * Primitive swap readahead code. We simply read an aligned block of
  * (1 << page_cluster) entries in the swap area. This method is chosen
@@ -640,7 +640,7 @@ static unsigned long swapin_nr_pages(unsigned long offset)
  * are used for every page of the readahead: neighbouring pages on swap
  * are fairly likely to have been swapped out from the same node.
  */
-struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
+struct folio *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 		struct mempolicy *mpol, pgoff_t ilx)
 {
 	struct folio *folio;
@@ -692,7 +692,7 @@ struct page *swap_cluster_readahead(swp_entry_t entry, gfp_t gfp_mask,
 	if (unlikely(page_allocated))
 		swap_read_folio(folio, false, NULL);
 	zswap_folio_swapin(folio);
-	return folio_file_page(folio, swp_offset(entry));
+	return folio;
 }
 
 int init_swap_address_space(unsigned int type, unsigned long nr_pages)
@@ -796,7 +796,7 @@ static void swap_ra_info(struct vm_fault *vmf,
 * @targ_ilx: NUMA interleave index, for use only when MPOL_INTERLEAVE
 * @vmf: fault information
 *
- * Returns the struct page for entry and addr, after queueing swapin.
+ * Returns the struct folio for entry and addr, after queueing swapin.
 *
 * Primitive swap readahead code. We simply read in a few pages whose
 * virtual addresses are around the fault address in the same vma.
@@ -804,9 +804,8 @@ static void swap_ra_info(struct vm_fault *vmf,
 * Caller must hold read mmap_lock if vmf->vma is not NULL.
 *
 */
-static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
-		struct mempolicy *mpol, pgoff_t targ_ilx,
-		struct vm_fault *vmf)
+static struct folio *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
+		struct mempolicy *mpol, pgoff_t targ_ilx, struct vm_fault *vmf)
 {
 	struct blk_plug plug;
 	struct swap_iocb *splug = NULL;
@@ -868,7 +867,7 @@ static struct page *swap_vma_readahead(swp_entry_t targ_entry, gfp_t gfp_mask,
 	if (unlikely(page_allocated))
 		swap_read_folio(folio, false, NULL);
 	zswap_folio_swapin(folio);
-	return folio_file_page(folio, swp_offset(entry));
+	return folio;
 }
 
 /**
@@ -888,14 +887,14 @@ struct page *swapin_readahead(swp_entry_t entry, gfp_t gfp_mask,
 {
 	struct mempolicy *mpol;
 	pgoff_t ilx;
-	struct page *page;
+	struct folio *folio;
 
 	mpol = get_vma_policy(vmf->vma, vmf->address, 0, &ilx);
-	page = swap_use_vma_readahead() ?
+	folio = swap_use_vma_readahead() ?
 		swap_vma_readahead(entry, gfp_mask, mpol, ilx, vmf) :
 		swap_cluster_readahead(entry, gfp_mask, mpol, ilx);
 	mpol_cond_put(mpol);
-	return page;
+	return folio_file_page(folio, swp_offset(entry));
 }
 
 #ifdef CONFIG_SYSFS