From patchwork Sat Feb 17 02:25:26 2024
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 13561217
From: "Matthew Wilcox (Oracle)"
To: Andrew Morton
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org
Subject: [PATCH v2 00/18] Rearrange batched folio freeing
Date: Sat, 17 Feb 2024 02:25:26 +0000
Message-ID: <20240217022546.1496101-1-willy@infradead.org>
X-Mailer: git-send-email 2.43.0

Other than the obvious "remove calls to compound_head" changes, the
fundamental belief here is that iterating a linked list is much slower
than iterating an array (5-15x slower in my testing).  There's also an
associated belief that since we iterate the batch of folios three
times, we do better when the array is small (ie 15 entries) than we do
with a batch that is hundreds of entries long, which only gives the
first pages the opportunity to fall out of cache by the time we get to
the end.

It is possible we should increase the size of folio_batch.  Hopefully
the bots let us know if this introduces any performance regressions.
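
To make the array-versus-list claim concrete, here is a minimal
userspace sketch of the comparison (not the benchmark behind the
numbers above; the node layout, element count and shuffle are
illustrative assumptions, and the exact ratio will vary by machine).
The point it shows is that a list walk cannot start fetching the next
element until the current one has arrived, while an array walk issues
independent loads the CPU can overlap:

/*
 * list_vs_array.c: illustrative only; not the kernel code from this series.
 * Build with: gcc -O2 -o list_vs_array list_vs_array.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NNODES (1UL << 20)

struct node {
	struct node *next;		/* list linkage, as in a list-based batch */
	unsigned long payload;
};

static double now_sec(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	struct node **array = malloc(NNODES * sizeof(*array));
	struct node *head, *n;
	unsigned long sum = 0;
	double t0, t1, t2;
	size_t i;

	/* Allocate nodes individually, then shuffle the visit order so
	 * neither traversal benefits from a sequential heap layout. */
	for (i = 0; i < NNODES; i++) {
		array[i] = malloc(sizeof(struct node));
		array[i]->payload = i;
	}
	srand(42);
	for (i = NNODES - 1; i > 0; i--) {
		size_t j = rand() % (i + 1);
		struct node *tmp = array[i];

		array[i] = array[j];
		array[j] = tmp;
	}

	/* Link the list in exactly the order the array will be walked. */
	for (i = 0; i < NNODES - 1; i++)
		array[i]->next = array[i + 1];
	array[NNODES - 1]->next = NULL;
	head = array[0];

	/* List walk: each load of ->next depends on the previous one. */
	t0 = now_sec();
	for (n = head; n; n = n->next)
		sum += n->payload;
	t1 = now_sec();

	/* Array walk: the array[i] loads are sequential and independent,
	 * so the CPU can overlap the node accesses. */
	for (i = 0; i < NNODES; i++)
		sum += array[i]->payload;
	t2 = now_sec();

	printf("list:  %.3f ms\narray: %.3f ms\n(checksum %lu)\n",
	       (t1 - t0) * 1e3, (t2 - t1) * 1e3, sum);
	return 0;
}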
v2:
 - Redo the shrink_folio_list() patch to free the mapped folios at
   the end instead of calling try_to_unmap_flush() more often.
 - Improve a number of commit messages
 - Use pcp_allowed_order() instead of PAGE_ALLOC_COSTLY_ORDER (Ryan)
 - Fix move_folios_to_lru() comment (Ryan)
 - Add patches 15-18
 - Collect R-b tags from Ryan

Matthew Wilcox (Oracle) (18):
  mm: Make folios_put() the basis of release_pages()
  mm: Convert free_unref_page_list() to use folios
  mm: Add free_unref_folios()
  mm: Use folios_put() in __folio_batch_release()
  memcg: Add mem_cgroup_uncharge_folios()
  mm: Remove use of folio list from folios_put()
  mm: Use free_unref_folios() in put_pages_list()
  mm: use __page_cache_release() in folios_put()
  mm: Handle large folios in free_unref_folios()
  mm: Allow non-hugetlb large folios to be batch processed
  mm: Free folios in a batch in shrink_folio_list()
  mm: Free folios directly in move_folios_to_lru()
  memcg: Remove mem_cgroup_uncharge_list()
  mm: Remove free_unref_page_list()
  mm: Remove lru_to_page()
  mm: Convert free_pages_and_swap_cache() to use folios_put()
  mm: Use a folio in __collapse_huge_page_copy_succeeded()
  mm: Convert free_swap_cache() to take a folio

 include/linux/memcontrol.h |  26 +++---
 include/linux/mm.h         |  20 +----
 include/linux/swap.h       |   8 +-
 mm/internal.h              |   4 +-
 mm/khugepaged.c            |  30 +++----
 mm/memcontrol.c            |  16 ++--
 mm/memory.c                |   2 +-
 mm/mlock.c                 |   3 +-
 mm/page_alloc.c            |  76 ++++++--------
 mm/swap.c                  | 180 ++++++++++++++-----------
 mm/swap_state.c            |  25 ++++--
 mm/vmscan.c                |  52 +++++------
 12 files changed, 218 insertions(+), 224 deletions(-)
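
For anyone skimming the patch titles, the overall shape of the
conversion is: callers gather folios into a small fixed-size batch and
the whole batch is processed in a few passes once it fills, instead of
walking a long linked list.  Below is a toy, self-contained userspace
model of that pattern; the struct, the pass functions and BATCH_SIZE
are illustrative stand-ins (BATCH_SIZE mirrors the 15-entry
folio_batch discussed above), not the kernel's folio_batch or
folios_put() interfaces:

/*
 * batch_model.c: a toy model of "gather into a small batch, then make
 * a few passes over the whole batch".  Names are stand-ins, not the
 * kernel API.  Build with: gcc -O2 -o batch_model batch_model.c
 */
#include <stdio.h>

#define BATCH_SIZE 15	/* mirrors the small folio_batch discussed above */

struct batch {
	unsigned int nr;
	void *items[BATCH_SIZE];
};

/* Stand-ins for the per-batch passes (e.g. dropping refcounts,
 * uncharging from the memcg, returning memory to the allocator). */
static void pass_one(void *item)   { (void)item; }
static void pass_two(void *item)   { (void)item; }
static void pass_three(void *item) { (void)item; }

static void batch_flush(struct batch *b)
{
	unsigned int i;

	/* Each pass walks a small array that is still cache-hot from
	 * the previous pass. */
	for (i = 0; i < b->nr; i++)
		pass_one(b->items[i]);
	for (i = 0; i < b->nr; i++)
		pass_two(b->items[i]);
	for (i = 0; i < b->nr; i++)
		pass_three(b->items[i]);
	b->nr = 0;
}

static void batch_add(struct batch *b, void *item)
{
	b->items[b->nr++] = item;
	if (b->nr == BATCH_SIZE)
		batch_flush(b);
}

int main(void)
{
	struct batch b = { .nr = 0 };
	long objects[100];
	int i;

	/* Callers hand over objects one at a time; processing happens
	 * BATCH_SIZE objects at a time, plus one final partial flush. */
	for (i = 0; i < 100; i++)
		batch_add(&b, &objects[i]);
	batch_flush(&b);
	printf("done\n");
	return 0;
}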