From patchwork Fri Aug 2 12:20:30 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13751515
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: baolin.wang@linux.alibaba.com, chrisl@kernel.org, david@redhat.com,
 hannes@cmpxchg.org, hughd@google.com, kaleshsingh@google.com,
 kasong@tencent.com, linux-kernel@vger.kernel.org, mhocko@suse.com,
 minchan@kernel.org, nphamcs@gmail.com, ryan.roberts@arm.com,
 senozhatsky@chromium.org, shakeel.butt@linux.dev, shy828301@gmail.com,
 surenb@google.com, v-songbaohua@oppo.com, willy@infradead.org,
 xiang@kernel.org, ying.huang@intel.com, yosryahmed@google.com,
 hch@infradead.org
Subject: [PATCH v6 1/2] mm: add nr argument in
 mem_cgroup_swapin_uncharge_swap() helper to support large folios
Date: Sat, 3 Aug 2024 00:20:30 +1200
Message-Id: <20240802122031.117548-2-21cnbao@gmail.com>
In-Reply-To: <20240802122031.117548-1-21cnbao@gmail.com>
References: <20240726094618.401593-1-21cnbao@gmail.com>
 <20240802122031.117548-1-21cnbao@gmail.com>
From: Barry Song

With large folio swap-in, we might need to uncharge multiple swap
entries all together, so add an nr argument to
mem_cgroup_swapin_uncharge_swap(). For the two existing callers, just
pass nr = 1.

Signed-off-by: Barry Song
Acked-by: Chris Li
---
 include/linux/memcontrol.h | 5 +++--
 mm/memcontrol.c            | 7 ++++---
 mm/memory.c                | 2 +-
 mm/swap_state.c            | 2 +-
 4 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 1b79760af685..44f7fb7dc0c8 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -682,7 +682,8 @@ int mem_cgroup_hugetlb_try_charge(struct mem_cgroup *memcg, gfp_t gfp,
 
 int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
 		gfp_t gfp, swp_entry_t entry);
-void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry);
+
+void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr_pages);
 
 void __mem_cgroup_uncharge(struct folio *folio);
 
@@ -1181,7 +1182,7 @@ static inline int mem_cgroup_swapin_charge_folio(struct folio *folio,
 	return 0;
 }
 
-static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
+static inline void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr)
 {
 }
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b889a7fbf382..5d763c234c44 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4572,14 +4572,15 @@ int mem_cgroup_swapin_charge_folio(struct folio *folio, struct mm_struct *mm,
 
 /*
  * mem_cgroup_swapin_uncharge_swap - uncharge swap slot
- * @entry: swap entry for which the page is charged
+ * @entry: the first swap entry for which the pages are charged
+ * @nr_pages: number of pages which will be uncharged
  *
  * Call this function after successfully adding the charged page to swapcache.
  *
  * Note: This function assumes the page for which swap slot is being uncharged
  * is order 0 page.
  */
-void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
+void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry, unsigned int nr_pages)
 {
 	/*
 	 * Cgroup1's unified memory+swap counter has been charged with the
@@ -4599,7 +4600,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
 		 * let's not wait for it. The page already received a
 		 * memory+swap charge, drop the swap entry duplicate.
 		 */
-		mem_cgroup_uncharge_swap(entry, 1);
+		mem_cgroup_uncharge_swap(entry, nr_pages);
 	}
 }
 
diff --git a/mm/memory.c b/mm/memory.c
index 4c8716cb306c..4cf4902db1ec 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4102,7 +4102,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 			ret = VM_FAULT_OOM;
 			goto out_page;
 		}
-		mem_cgroup_swapin_uncharge_swap(entry);
+		mem_cgroup_swapin_uncharge_swap(entry, 1);
 
 		shadow = get_shadow_from_swap_cache(entry);
 		if (shadow)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 293ff1afdca4..1159e3225754 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -522,7 +522,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	if (add_to_swap_cache(new_folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
 		goto fail_unlock;
 
-	mem_cgroup_swapin_uncharge_swap(entry);
+	mem_cgroup_swapin_uncharge_swap(entry, 1);
 
 	if (shadow)
 		workingset_refault(new_folio, shadow);
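As a rough userspace sketch of the new calling convention (the types and
stub bodies here are hypothetical stand-ins, not kernel code), a
large-folio swap-in can now drop the charges for all of its contiguous
entries in one call instead of assuming order 0:

/* cc batched_uncharge.c -o batched_uncharge && ./batched_uncharge */
#include <stdio.h>

/* Userspace stand-in for the kernel's swap entry type. */
typedef struct { unsigned long val; } swp_entry_t;

/* Stand-in for mem_cgroup_uncharge_swap(): drops nr charges at once. */
static void mem_cgroup_uncharge_swap(swp_entry_t entry, unsigned int nr)
{
	printf("dropping swap charge for %u entr%s starting at %lu\n",
	       nr, nr == 1 ? "y" : "ies", entry.val);
}

/* Mirrors the new signature: one call covers every page of a folio. */
static void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry,
					    unsigned int nr_pages)
{
	mem_cgroup_uncharge_swap(entry, nr_pages);
}

int main(void)
{
	swp_entry_t entry = { .val = 512 };

	mem_cgroup_swapin_uncharge_swap(entry, 1);  /* order-0, as before  */
	mem_cgroup_swapin_uncharge_swap(entry, 16); /* order-4 large folio */
	return 0;
}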
From patchwork Fri Aug 2 12:20:31 2024
X-Patchwork-Submitter: Barry Song <21cnbao@gmail.com>
X-Patchwork-Id: 13751516
From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: baolin.wang@linux.alibaba.com, chrisl@kernel.org, david@redhat.com,
 hannes@cmpxchg.org, hughd@google.com, kaleshsingh@google.com,
 kasong@tencent.com, linux-kernel@vger.kernel.org, mhocko@suse.com,
 minchan@kernel.org, nphamcs@gmail.com, ryan.roberts@arm.com,
 senozhatsky@chromium.org, shakeel.butt@linux.dev, shy828301@gmail.com,
 surenb@google.com, v-songbaohua@oppo.com, willy@infradead.org,
 xiang@kernel.org, ying.huang@intel.com, yosryahmed@google.com,
 hch@infradead.org, Chuanhua Han
Subject: [PATCH v6 2/2] mm: support large folios swap-in for zRAM-like
 devices
Date: Sat, 3 Aug 2024 00:20:31 +1200
Message-Id: <20240802122031.117548-3-21cnbao@gmail.com>
In-Reply-To: <20240802122031.117548-1-21cnbao@gmail.com>
References: <20240726094618.401593-1-21cnbao@gmail.com>
 <20240802122031.117548-1-21cnbao@gmail.com>

From: Chuanhua Han

Currently we have mTHP features, but unfortunately, without support for
large folio swap-in, once these large folios are swapped out they are
lost: mTHP swap is a one-way process. This lack of swap-in functionality
prevents mTHP from being used on devices like Android that rely heavily
on swap.

This patch introduces mTHP swap-in support, starting with sync devices
such as zRAM. This is probably the simplest and most common use case,
benefiting billions of Android phones and similar devices at minimal
implementation cost. In this straightforward scenario, large folios are
always exclusive, eliminating the need to handle complex rmap and
swapcache issues.

It offers several benefits:

1. Enables bidirectional mTHP swapping, allowing mTHP to be recovered
   after swap-out and swap-in. Large folios in the buddy system are
   also preserved as much as possible, rather than being fragmented
   due to swap-in.

2. Eliminates fragmentation in swap slots and supports successful
   THP_SWPOUT.
   w/o this patch (refer to the data from Chris's and Kairui's latest
   swap allocator optimization, running ./thp_swap_allocator_test
   without the "-a" option [1]):

   ./thp_swap_allocator_test
   Iteration 1:  swpout inc: 233, swpout fallback inc: 0,   Fallback percentage: 0.00%
   Iteration 2:  swpout inc: 131, swpout fallback inc: 101, Fallback percentage: 43.53%
   Iteration 3:  swpout inc: 71,  swpout fallback inc: 155, Fallback percentage: 68.58%
   Iteration 4:  swpout inc: 55,  swpout fallback inc: 168, Fallback percentage: 75.34%
   Iteration 5:  swpout inc: 35,  swpout fallback inc: 191, Fallback percentage: 84.51%
   Iteration 6:  swpout inc: 25,  swpout fallback inc: 199, Fallback percentage: 88.84%
   Iteration 7:  swpout inc: 23,  swpout fallback inc: 205, Fallback percentage: 89.91%
   Iteration 8:  swpout inc: 9,   swpout fallback inc: 219, Fallback percentage: 96.05%
   Iteration 9:  swpout inc: 13,  swpout fallback inc: 213, Fallback percentage: 94.25%
   Iteration 10: swpout inc: 12,  swpout fallback inc: 216, Fallback percentage: 94.74%
   Iteration 11: swpout inc: 16,  swpout fallback inc: 213, Fallback percentage: 93.01%
   Iteration 12: swpout inc: 10,  swpout fallback inc: 210, Fallback percentage: 95.45%
   Iteration 13: swpout inc: 16,  swpout fallback inc: 212, Fallback percentage: 92.98%
   Iteration 14: swpout inc: 12,  swpout fallback inc: 212, Fallback percentage: 94.64%
   Iteration 15: swpout inc: 15,  swpout fallback inc: 211, Fallback percentage: 93.36%
   Iteration 16: swpout inc: 15,  swpout fallback inc: 200, Fallback percentage: 93.02%
   Iteration 17: swpout inc: 9,   swpout fallback inc: 220, Fallback percentage: 96.07%

   w/ this patch (always 0%):
   Iteration 1:  swpout inc: 948, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 2:  swpout inc: 953, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 3:  swpout inc: 950, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 4:  swpout inc: 952, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 5:  swpout inc: 950, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 6:  swpout inc: 950, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 7:  swpout inc: 947, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 8:  swpout inc: 950, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 9:  swpout inc: 950, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 10: swpout inc: 945, swpout fallback inc: 0, Fallback percentage: 0.00%
   Iteration 11: swpout inc: 947, swpout fallback inc: 0, Fallback percentage: 0.00%
   ...

3. With both mTHP swap-out and swap-in supported, we can enable zsmalloc
   compression/decompression at a larger granularity[2]. The upcoming
   optimization in zsmalloc will significantly increase swap speed and
   improve compression efficiency. Tested by running 100 iterations of
   swapping 100 MiB of anon memory, swap speed improved dramatically:

                time consumption of swapin(ms)   time consumption of swapout(ms)
   lz4 4k                 45274                            90540
   lz4 64k                22942                            55667
   zstdn 4k               85035                           186585
   zstdn 64k              46558                           118533

   The compression ratio also improved, as evaluated with 1 GiB of data:

   granularity   orig_data_size   compr_data_size
   4KiB-zstd     1048576000       246876055
   64KiB-zstd    1048576000       199763892

   i.e. the compressed size drops from about 23.5% to about 19.1% of the
   original data. Without mTHP swap-in, the potential optimizations in
   zsmalloc cannot be realized.

4. Even mTHP swap-in itself can reduce swap-in page faults by a factor
   of nr_pages. Swapping in content filled with the same data 0x11, w/o
   and w/ the patch for five rounds (since the content is the same,
   decompression is very fast.
   This primarily assesses the impact of reduced page faults):

                        swp in bandwidth (bytes/ms)
                        w/o           w/
   round1               624152        1127501
   round2               631672        1127501
   round3               620459        1139756
   round4               606113        1139756
   round5               624152        1152281
   avg                  621310        1137359    +83%

[1] https://lore.kernel.org/all/20240730-swap-allocator-v5-0-cb9c148b9297@kernel.org/
[2] https://lore.kernel.org/all/20240327214816.31191-1-21cnbao@gmail.com/
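A small standalone sketch of the alignment filter implemented by
thp_swap_suitable_orders() in the diff below may help (PAGE_SHIFT, the
fault address, and the swap offset are assumed values for illustration):
an order is only usable when the faulting virtual page number and the
swap offset are congruent modulo the folio's page count, so the aligned
virtual range can map to contiguous swap slots.

/* cc order_filter.c -o order_filter && ./order_filter */
#include <stdio.h>

#define PAGE_SHIFT 12

static int order_is_suitable(unsigned long addr, unsigned long swp_offset,
			     int order)
{
	unsigned long nr = 1UL << order;

	/* virtual page number and swap offset must match modulo nr */
	return ((addr >> PAGE_SHIFT) % nr) == (swp_offset % nr);
}

int main(void)
{
	unsigned long addr = 0x7f0000010000UL; /* hypothetical fault address */
	unsigned long offset = 0x218;          /* hypothetical swap offset   */

	/* order 4 is rejected (offset not 16-aligned); order 3 passes */
	for (int order = 4; order >= 0; order--)
		printf("order %d: %s\n", order,
		       order_is_suitable(addr, offset, order) ? "ok" : "skip");
	return 0;
}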
Signed-off-by: Chuanhua Han
Co-developed-by: Barry Song
Signed-off-by: Barry Song
Signed-off-by: Andrew Morton
Reported-by: Kairui Song
Signed-off-by: Barry Song
---
 mm/memory.c | 211 ++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 188 insertions(+), 23 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 4cf4902db1ec..07029532469a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3986,6 +3986,152 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 	return VM_FAULT_SIGBUS;
 }
 
+/*
+ * Check whether a range of PTEs consists entirely of swap entries with
+ * contiguous swap offsets and the same SWAP_HAS_CACHE status.
+ * ptep must point to the first PTE in the range.
+ */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static bool can_swapin_thp(struct vm_fault *vmf, pte_t *ptep, int nr_pages)
+{
+	struct swap_info_struct *si;
+	unsigned long addr;
+	swp_entry_t entry;
+	pgoff_t offset;
+	char has_cache;
+	int idx, i;
+	pte_t pte;
+
+	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
+	idx = (vmf->address - addr) / PAGE_SIZE;
+	pte = ptep_get(ptep);
+
+	if (!pte_same(pte, pte_move_swp_offset(vmf->orig_pte, -idx)))
+		return false;
+	entry = pte_to_swp_entry(pte);
+	offset = swp_offset(entry);
+	if (swap_pte_batch(ptep, nr_pages, pte) != nr_pages)
+		return false;
+
+	si = swp_swap_info(entry);
+	has_cache = si->swap_map[offset] & SWAP_HAS_CACHE;
+	for (i = 1; i < nr_pages; i++) {
+		/*
+		 * While allocating a large folio and doing swap_read_folio()
+		 * for the SWP_SYNCHRONOUS_IO path, the faulted pte has no
+		 * swapcache. We need to ensure all PTEs have no cache as
+		 * well; otherwise, we might go to the swap device while the
+		 * content is in the swapcache.
+		 */
+		if ((si->swap_map[offset + i] & SWAP_HAS_CACHE) != has_cache)
+			return false;
+	}
+
+	return true;
+}
+
+static inline unsigned long thp_swap_suitable_orders(pgoff_t swp_offset,
+		unsigned long addr, unsigned long orders)
+{
+	int order, nr;
+
+	order = highest_order(orders);
+
+	/*
+	 * To swap-in a THP with nr pages, we require that its first
+	 * swap_offset is aligned with nr. This can filter out most
+	 * invalid entries.
+	 */
+	while (orders) {
+		nr = 1 << order;
+		if ((addr >> PAGE_SHIFT) % nr == swp_offset % nr)
+			break;
+		order = next_order(&orders, order);
+	}
+
+	return orders;
+}
+#else
+static inline bool can_swapin_thp(struct vm_fault *vmf, pte_t *ptep, int nr_pages)
+{
+	return false;
+}
+#endif
+
+static struct folio *alloc_swap_folio(struct vm_fault *vmf)
+{
+	struct vm_area_struct *vma = vmf->vma;
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	unsigned long orders;
+	struct folio *folio;
+	unsigned long addr;
+	swp_entry_t entry;
+	spinlock_t *ptl;
+	pte_t *pte;
+	gfp_t gfp;
+	int order;
+
+	/*
+	 * If uffd is active for the vma we need per-page fault fidelity to
+	 * maintain the uffd semantics.
+	 */
+	if (unlikely(userfaultfd_armed(vma)))
+		goto fallback;
+
+	/*
+	 * A large swapped out folio could be partially or fully in zswap. We
+	 * lack handling for such cases, so fallback to swapping in order-0
+	 * folio.
+	 */
+	if (!zswap_never_enabled())
+		goto fallback;
+
+	entry = pte_to_swp_entry(vmf->orig_pte);
+	/*
+	 * Get a list of all the (large) orders below PMD_ORDER that are enabled
+	 * and suitable for swapping THP.
+	 */
+	orders = thp_vma_allowable_orders(vma, vma->vm_flags,
+			TVA_IN_PF | TVA_ENFORCE_SYSFS, BIT(PMD_ORDER) - 1);
+	orders = thp_vma_suitable_orders(vma, vmf->address, orders);
+	orders = thp_swap_suitable_orders(swp_offset(entry), vmf->address, orders);
+
+	if (!orders)
+		goto fallback;
+
+	pte = pte_offset_map_lock(vmf->vma->vm_mm, vmf->pmd, vmf->address & PMD_MASK, &ptl);
+	if (unlikely(!pte))
+		goto fallback;
+
+	/*
+	 * For do_swap_page, find the highest order where the aligned range is
+	 * completely swap entries with contiguous swap offsets.
+	 */
+	order = highest_order(orders);
+	while (orders) {
+		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
+		if (can_swapin_thp(vmf, pte + pte_index(addr), 1 << order))
+			break;
+		order = next_order(&orders, order);
+	}
+
+	pte_unmap_unlock(pte, ptl);
+
+	/* Try allocating the highest of the remaining orders. */
+	gfp = vma_thp_gfp_mask(vma);
+	while (orders) {
+		addr = ALIGN_DOWN(vmf->address, PAGE_SIZE << order);
+		folio = vma_alloc_folio(gfp, order, vma, addr, true);
+		if (folio)
+			return folio;
+		order = next_order(&orders, order);
+	}
+
+fallback:
+#endif
+	return vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma, vmf->address, false);
+}
+
+
 /*
  * We enter with non-exclusive mmap_lock (to exclude vma changes,
  * but allow concurrent faults), and pte mapped but not yet locked.
@@ -4074,35 +4220,37 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
-			/*
-			 * Prevent parallel swapin from proceeding with
-			 * the cache flag. Otherwise, another thread may
-			 * finish swapin first, free the entry, and swapout
-			 * reusing the same entry. It's undetectable as
-			 * pte_same() returns true due to entry reuse.
-			 */
-			if (swapcache_prepare(entry, 1)) {
-				/* Relax a bit to prevent rapid repeated page faults */
-				schedule_timeout_uninterruptible(1);
-				goto out;
-			}
-			need_clear_cache = true;
-
 			/* skip swapcache */
-			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
-						vma, vmf->address, false);
+			folio = alloc_swap_folio(vmf);
 			page = &folio->page;
 			if (folio) {
 				__folio_set_locked(folio);
 				__folio_set_swapbacked(folio);
 
+				nr_pages = folio_nr_pages(folio);
+				if (folio_test_large(folio))
+					entry.val = ALIGN_DOWN(entry.val, nr_pages);
+				/*
+				 * Prevent parallel swapin from proceeding with
+				 * the cache flag. Otherwise, another thread may
+				 * finish swapin first, free the entry, and swapout
+				 * reusing the same entry. It's undetectable as
+				 * pte_same() returns true due to entry reuse.
+				 */
+				if (swapcache_prepare(entry, nr_pages)) {
+					/* Relax a bit to prevent rapid repeated page faults */
+					schedule_timeout_uninterruptible(1);
+					goto out_page;
+				}
+				need_clear_cache = true;
+
 				if (mem_cgroup_swapin_charge_folio(folio,
 							vma->vm_mm, GFP_KERNEL,
 							entry)) {
 					ret = VM_FAULT_OOM;
 					goto out_page;
 				}
-				mem_cgroup_swapin_uncharge_swap(entry, 1);
+				mem_cgroup_swapin_uncharge_swap(entry, nr_pages);
 
 				shadow = get_shadow_from_swap_cache(entry);
 				if (shadow)
@@ -4209,6 +4357,22 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out_nomap;
 	}
 
+	/* allocated large folios for SWP_SYNCHRONOUS_IO */
+	if (folio_test_large(folio) && !folio_test_swapcache(folio)) {
+		unsigned long nr = folio_nr_pages(folio);
+		unsigned long folio_start = ALIGN_DOWN(vmf->address, nr * PAGE_SIZE);
+		unsigned long idx = (vmf->address - folio_start) / PAGE_SIZE;
+		pte_t *folio_ptep = vmf->pte - idx;
+
+		if (!can_swapin_thp(vmf, folio_ptep, nr))
+			goto out_nomap;
+
+		page_idx = idx;
+		address = folio_start;
+		ptep = folio_ptep;
+		goto check_folio;
+	}
+
 	nr_pages = 1;
 	page_idx = 0;
 	address = vmf->address;
@@ -4340,11 +4504,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		folio_add_lru_vma(folio, vma);
 	} else if (!folio_test_anon(folio)) {
 		/*
-		 * We currently only expect small !anon folios, which are either
-		 * fully exclusive or fully shared. If we ever get large folios
-		 * here, we have to be careful.
+		 * We currently only expect small !anon folios which are either
+		 * fully exclusive or fully shared, or new allocated large folios
+		 * which are fully exclusive. If we ever get large folios within
+		 * swapcache here, we have to be careful.
 		 */
-		VM_WARN_ON_ONCE(folio_test_large(folio));
+		VM_WARN_ON_ONCE(folio_test_large(folio) && folio_test_swapcache(folio));
 		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
 		folio_add_new_anon_rmap(folio, vma, address, rmap_flags);
 	} else {
@@ -4387,7 +4552,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 out:
 	/* Clear the swap cache pin for direct swapin after PTL unlock */
 	if (need_clear_cache)
-		swapcache_clear(si, entry, 1);
+		swapcache_clear(si, entry, nr_pages);
 	if (si)
 		put_swap_device(si);
 	return ret;
@@ -4403,7 +4568,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		folio_put(swapcache);
 	}
 	if (need_clear_cache)
-		swapcache_clear(si, entry, 1);
+		swapcache_clear(si, entry, nr_pages);
 	if (si)
 		put_swap_device(si);
 	return ret;
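A small sketch of the address arithmetic used in the large-folio branch
above (the address and folio size are assumed values for illustration):
once an order-N folio has been allocated, the fault address is rounded
down to the folio boundary and idx locates the faulting page within the
folio, so the PTE pointer can be rebased to the first entry of the
range.

/* cc folio_addr.c -o folio_addr && ./folio_addr */
#include <stdio.h>

#define PAGE_SIZE 4096UL
/* works for power-of-two alignments, as in the kernel macro */
#define ALIGN_DOWN(x, a) ((x) & ~((a) - 1))

int main(void)
{
	unsigned long address = 0x7f0000013000UL; /* hypothetical fault */
	unsigned long nr = 16;                    /* order-4 folio      */

	unsigned long folio_start = ALIGN_DOWN(address, nr * PAGE_SIZE);
	unsigned long idx = (address - folio_start) / PAGE_SIZE;

	/* prints folio_start=0x7f0000010000 idx=3 */
	printf("folio_start=%#lx idx=%lu\n", folio_start, idx);
	return 0;
}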