From patchwork Tue Jun 4 04:24:45 2024
X-Patchwork-Submitter: alexs@kernel.org
X-Patchwork-Id: 13684608
From: alexs@kernel.org
To: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    izik.eidus@ravellosystems.com, willy@infradead.org, aarcange@redhat.com,
    chrisw@sous-sol.org, hughd@google.com, david@redhat.com
Cc: "Alex Shi (tencent)"
Subject: [PATCH 03/10] mm/ksm: use folio in try_to_merge_one_page
Date: Tue, 4 Jun 2024 12:24:45 +0800
Message-ID: <20240604042454.2012091-4-alexs@kernel.org>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240604042454.2012091-1-alexs@kernel.org>
References: <20240604042454.2012091-1-alexs@kernel.org>
MIME-Version: 1.0

From: "Alex Shi (tencent)"

scan_get_next_rmap_item() actually returns a folio now, so the pages passed
along the calling path to try_to_merge_one_page() are really folios. Let's
use folio instead of page in the function to save a few compound-page checks
in the callee functions. A 'page' is still kept here since the flush
functions do not support folios yet.

Signed-off-by: Alex Shi (tencent)
---
 mm/ksm.c | 61 ++++++++++++++++++++++++++++++++------------------------
 1 file changed, 35 insertions(+), 26 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index e2fdb9dd98e2..21bfa9bfb210 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1462,24 +1462,29 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 }
 
 /*
- * try_to_merge_one_page - take two pages and merge them into one
- * @vma: the vma that holds the pte pointing to page
- * @page: the PageAnon page that we want to replace with kpage
- * @kpage: the PageKsm page that we want to map instead of page,
- *         or NULL the first time when we want to use page as kpage.
+ * try_to_merge_one_page - take two folios and merge them into one
+ * @vma: the vma that holds the pte pointing to folio
+ * @folio: the PageAnon page that we want to replace with kfolio
+ * @kfolio: the PageKsm page that we want to map instead of folio,
+ *          or NULL the first time when we want to use folio as kfolio.
  *
- * This function returns 0 if the pages were merged, -EFAULT otherwise.
+ * This function returns 0 if the folios were merged, -EFAULT otherwise.
  */
-static int try_to_merge_one_page(struct vm_area_struct *vma, struct page *page,
-                                 struct ksm_rmap_item *rmap_item, struct page *kpage)
+static int try_to_merge_one_page(struct vm_area_struct *vma, struct folio *folio,
+                                 struct ksm_rmap_item *rmap_item, struct folio *kfolio)
 {
         pte_t orig_pte = __pte(0);
         int err = -EFAULT;
+        struct page *page = folio_page(folio, 0);
+        struct page *kpage;
 
-        if (page == kpage)                      /* ksm page forked */
+        if (kfolio)
+                kpage = folio_page(kfolio, 0);
+
+        if (folio == kfolio)                    /* ksm page forked */
                 return 0;
 
-        if (!PageAnon(page))
+        if (!folio_test_anon(folio))
                 goto out;
 
         /*
@@ -1489,11 +1494,11 @@ static int try_to_merge_one_page(struct vm_area_struct *vma, struct page *page,
          * prefer to continue scanning and merging different pages,
          * then come back to this page when it is unlocked.
          */
-        if (!trylock_page(page))
+        if (!folio_trylock(folio))
                 goto out;
 
-        if (PageTransCompound(page)) {
-                if (split_huge_page(page))
+        if (folio_test_large(folio)) {
+                if (split_folio(folio))
                         goto out_unlock;
         }
 
@@ -1506,35 +1511,36 @@ static int try_to_merge_one_page(struct vm_area_struct *vma, struct page *page,
          * ptes are necessarily already write-protected.  But in either
          * case, we need to lock and check page_count is not raised.
          */
-        if (write_protect_page(vma, page_folio(page), &orig_pte) == 0) {
-                if (!kpage) {
+        if (write_protect_page(vma, folio, &orig_pte) == 0) {
+                if (!kfolio) {
                         /*
                          * While we hold page lock, upgrade page from
                          * PageAnon+anon_vma to PageKsm+NULL stable_node:
                          * stable_tree_insert() will update stable_node.
                          */
-                        folio_set_stable_node(page_folio(page), NULL);
-                        mark_page_accessed(page);
+                        folio_set_stable_node(folio, NULL);
+                        folio_mark_accessed(folio);
                         /*
                          * Page reclaim just frees a clean page with no dirty
                          * ptes: make sure that the ksm page would be swapped.
                          */
-                        if (!PageDirty(page))
-                                SetPageDirty(page);
+                        if (!folio_test_dirty(folio))
+                                folio_set_dirty(folio);
                         err = 0;
                 } else if (pages_identical(page, kpage))
                         err = replace_page(vma, page, kpage, orig_pte);
         }
 
 out_unlock:
-        unlock_page(page);
+        folio_unlock(folio);
 out:
         return err;
 }
 
 /*
  * try_to_merge_with_ksm_page - like try_to_merge_two_pages,
- * but no new kernel page is allocated: kpage must already be a ksm page.
+ * but no new kernel page is allocated, kpage is a ksm page or NULL
+ * if we use page as first ksm page.
  *
  * This function returns 0 if the pages were merged, -EFAULT otherwise.
  */
@@ -1544,13 +1550,17 @@ static int try_to_merge_with_ksm_page(struct ksm_rmap_item *rmap_item,
         struct mm_struct *mm = rmap_item->mm;
         struct vm_area_struct *vma;
         int err = -EFAULT;
+        struct folio *kfolio = NULL;
 
         mmap_read_lock(mm);
         vma = find_mergeable_vma(mm, rmap_item->address);
         if (!vma)
                 goto out;
 
-        err = try_to_merge_one_page(vma, page, rmap_item, kpage);
+        if (kpage)
+                kfolio = page_folio(kpage);
+
+        err = try_to_merge_one_page(vma, page_folio(page), rmap_item, kfolio);
         if (err)
                 goto out;
 
@@ -2385,8 +2395,8 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_ite
                 mmap_read_lock(mm);
                 vma = find_mergeable_vma(mm, rmap_item->address);
                 if (vma) {
-                        err = try_to_merge_one_page(vma, page, rmap_item,
-                                        ZERO_PAGE(rmap_item->address));
+                        err = try_to_merge_one_page(vma, page_folio(page), rmap_item,
+                                        page_folio(ZERO_PAGE(rmap_item->address)));
                         trace_ksm_merge_one_page(
                                 page_to_pfn(ZERO_PAGE(rmap_item->address)),
                                 rmap_item, mm, err);
@@ -2671,8 +2681,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
                 rmap_item = get_next_rmap_item(mm_slot,
                                         ksm_scan.rmap_list, ksm_scan.address);
                 if (rmap_item) {
-                        ksm_scan.rmap_list =
-                                        &rmap_item->rmap_list;
+                        ksm_scan.rmap_list = &rmap_item->rmap_list;
 
                         if (should_skip_rmap_item(*page, rmap_item))
                                 goto next_page;