From patchwork Mon Aug 8 19:34:23 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox <willy@infradead.org>
X-Patchwork-Id: 12939086
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, hughd@google.com
Subject: [PATCH 55/59] huge_memory: Convert split_huge_page_to_list() to use a folio
Date: Mon, 8 Aug 2022 20:34:23 +0100
Message-Id: <20220808193430.3378317-56-willy@infradead.org>
X-Mailer: git-send-email 2.37.1
In-Reply-To: <20220808193430.3378317-1-willy@infradead.org>
References: <20220808193430.3378317-1-willy@infradead.org>

Saves many calls to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
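A reviewer note on where the saving comes from (commentary, not part of
the change): every PageFoo(page) test must first resolve a possible tail
page to its head page via compound_head() before it can read the flag,
while a struct folio is by definition a head page, so folio_test_foo()
reads the flag directly and the single head lookup happens once, up
front, in page_folio(). The userspace sketch below models that
difference; the toy_* types and helpers are illustrative stand-ins for
the kernel structures, not kernel API.

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Toy model of a compound page: tail pages point at their head. */
struct toy_page {
	struct toy_page *compound_head;	/* NULL if this is a head page */
	bool writeback;
};

/* A "folio" is always a head page, so no indirection is ever needed.
 * Embedding the page at offset 0 mirrors the kernel's folio/page overlay. */
struct toy_folio {
	struct toy_page page;
};

/* The tail-to-head lookup that every page-based test repeats. */
static struct toy_page *toy_compound_head(struct toy_page *page)
{
	return page->compound_head ? page->compound_head : page;
}

/* page_folio(): pay for the head lookup exactly once. */
static struct toy_folio *toy_page_folio(struct toy_page *page)
{
	return (struct toy_folio *)toy_compound_head(page);
}

/* Page API: each test hides a compound_head() call. */
static bool toy_page_writeback(struct toy_page *page)
{
	return toy_compound_head(page)->writeback;
}

/* Folio API: the caller already holds the head; just read the flag. */
static bool toy_folio_test_writeback(struct toy_folio *folio)
{
	return folio->page.writeback;
}

int main(void)
{
	struct toy_page head = { .compound_head = NULL, .writeback = true };
	struct toy_page tail = { .compound_head = &head };
	struct toy_folio *folio = toy_page_folio(&tail);	/* one lookup */

	printf("page API:  %d\n", toy_page_writeback(&tail));	/* lookup again */
	printf("folio API: %d\n", toy_folio_test_writeback(folio)); /* no lookup */
	return 0;
}

In split_huge_page_to_list(), the locked, compound, writeback and anon
tests each paid that hidden lookup before this change; afterwards it is
paid once, in page_folio().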
 mm/huge_memory.c | 49 ++++++++++++++++++++++++-------------------------
 1 file changed, 24 insertions(+), 25 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7b998f2083aa..431a3b7078c7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2592,27 +2592,26 @@ bool can_split_folio(struct folio *folio, int *pextra_pins)
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct folio *folio = page_folio(page);
-	struct page *head = &folio->page;
-	struct deferred_split *ds_queue = get_deferred_split_queue(head);
-	XA_STATE(xas, &head->mapping->i_pages, head->index);
+	struct deferred_split *ds_queue = get_deferred_split_queue(&folio->page);
+	XA_STATE(xas, &folio->mapping->i_pages, folio->index);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int extra_pins, ret;
 	pgoff_t end;
 	bool is_hzp;
 
-	VM_BUG_ON_PAGE(!PageLocked(head), head);
-	VM_BUG_ON_PAGE(!PageCompound(head), head);
+	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
+	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
-	is_hzp = is_huge_zero_page(head);
-	VM_WARN_ON_ONCE_PAGE(is_hzp, head);
+	is_hzp = is_huge_zero_page(&folio->page);
+	VM_WARN_ON_ONCE_FOLIO(is_hzp, folio);
 	if (is_hzp)
 		return -EBUSY;
 
-	if (PageWriteback(head))
+	if (folio_test_writeback(folio))
 		return -EBUSY;
 
-	if (PageAnon(head)) {
+	if (folio_test_anon(folio)) {
 		/*
 		 * The caller does not necessarily hold an mmap_lock that would
 		 * prevent the anon_vma disappearing so we first we take a
@@ -2621,7 +2620,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		 * is taken to serialise against parallel split or collapse
 		 * operations.
 		 */
-		anon_vma = page_get_anon_vma(head);
+		anon_vma = page_get_anon_vma(&folio->page);
 		if (!anon_vma) {
 			ret = -EBUSY;
 			goto out;
@@ -2630,7 +2629,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		mapping = NULL;
 		anon_vma_lock_write(anon_vma);
 	} else {
-		mapping = head->mapping;
+		mapping = folio->mapping;
 
 		/* Truncated ? */
 		if (!mapping) {
@@ -2638,7 +2637,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 			goto out;
 		}
 
-		xas_split_alloc(&xas, head, compound_order(head),
+		xas_split_alloc(&xas, folio, folio_order(folio),
 				mapping_gfp_mask(mapping) & GFP_RECLAIM_MASK);
 		if (xas_error(&xas)) {
 			ret = xas_error(&xas);
@@ -2653,7 +2652,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		 * but on 32-bit, i_size_read() takes an irq-unsafe seqlock,
 		 * which cannot be nested inside the page tree lock.  So note
		 * end now: i_size itself may be changed at any moment, but
-		 * head page lock is good enough to serialize the trimming.
+		 * folio lock is good enough to serialize the trimming.
 		 */
 		end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE);
 		if (shmem_mapping(mapping))
@@ -2669,38 +2668,38 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		goto out_unlock;
 	}
 
-	unmap_page(head);
+	unmap_page(&folio->page);
 
 	/* block interrupt reentry in xa_lock and spinlock */
 	local_irq_disable();
 	if (mapping) {
 		/*
-		 * Check if the head page is present in page cache.
-		 * We assume all tail are present too, if head is there.
+		 * Check if the folio is present in page cache.
+		 * We assume all tail are present too, if folio is there.
 		 */
 		xas_lock(&xas);
 		xas_reset(&xas);
-		if (xas_load(&xas) != head)
+		if (xas_load(&xas) != folio)
 			goto fail;
 	}
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
 	spin_lock(&ds_queue->split_queue_lock);
-	if (page_ref_freeze(head, 1 + extra_pins)) {
-		if (!list_empty(page_deferred_list(head))) {
+	if (folio_ref_freeze(folio, 1 + extra_pins)) {
+		if (!list_empty(page_deferred_list(&folio->page))) {
 			ds_queue->split_queue_len--;
-			list_del(page_deferred_list(head));
+			list_del(page_deferred_list(&folio->page));
 		}
 		spin_unlock(&ds_queue->split_queue_lock);
 		if (mapping) {
-			int nr = thp_nr_pages(head);
+			int nr = folio_nr_pages(folio);
 
-			xas_split(&xas, head, thp_order(head));
-			if (PageSwapBacked(head)) {
-				__mod_lruvec_page_state(head, NR_SHMEM_THPS,
+			xas_split(&xas, folio, folio_order(folio));
+			if (folio_test_swapbacked(folio)) {
+				__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS,
 							-nr);
 			} else {
-				__mod_lruvec_page_state(head, NR_FILE_THPS,
+				__lruvec_stat_mod_folio(folio, NR_FILE_THPS,
 							-nr);
 				filemap_nr_thps_dec(mapping);
 			}
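
A reviewer note on the freeze idiom in the last hunk:
folio_ref_freeze(folio, 1 + extra_pins) atomically replaces the
reference count with zero, but only if it still equals the expected
count (the caller's reference plus the extra pins computed earlier by
can_split_folio()), so no new reference can be taken while the folio is
being split apart. A minimal userspace sketch of that pattern, assuming
C11 atomics in place of the kernel's page_ref primitives (the toy_*
helpers are illustrative only):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Freeze: atomically swap the count to 0 iff it equals the expected
 * number of references.  After a successful freeze, a speculative
 * "try get" sees 0 and fails, so no new reference can appear while
 * the structure is being taken apart. */
static bool toy_ref_freeze(atomic_int *refcount, int expected)
{
	int old = expected;
	return atomic_compare_exchange_strong(refcount, &old, 0);
}

/* A speculative reference: only succeeds on a non-zero count. */
static bool toy_ref_try_get(atomic_int *refcount)
{
	int old = atomic_load(refcount);
	while (old != 0) {
		if (atomic_compare_exchange_weak(refcount, &old, old + 1))
			return true;
	}
	return false;
}

int main(void)
{
	atomic_int refcount = 3;	/* e.g. 1 + extra_pins == 3 */

	if (toy_ref_freeze(&refcount, 3))
		printf("frozen; try_get now fails: %d\n",
		       toy_ref_try_get(&refcount));
	return 0;
}

The compare-and-swap to zero is what makes this safe: a concurrent
speculative getter observes either the full expected count or zero,
never a half-torn-down folio.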