From patchwork Mon May 6 21:13:33 2024
X-Patchwork-Submitter: Yosry Ahmed
X-Patchwork-Id: 13656010
Date: Mon, 6 May 2024 21:13:33 +0000
Message-ID: <20240506211333.346605-1-yosryahmed@google.com>
Subject: [PATCH mm-unstable] mm: rmap: abstract updating per-node and per-memcg stats
From: Yosry Ahmed
To: Andrew Morton
Cc: David Hildenbrand, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Yosry Ahmed

A lot of intricacies go into updating the stats when adding or removing
mappings: which stat index to
use and which function to use. Abstract this away into a new
static helper in rmap.c, __folio_mod_stat().

This adds an unnecessary call to folio_test_anon() in
__folio_add_anon_rmap() and __folio_add_file_rmap(). However, the folio
struct should already be in the cache at this point, so it shouldn't
cause any noticeable overhead.

No functional change intended.

Signed-off-by: Yosry Ahmed
Reviewed-by: David Hildenbrand
---
This applies on top of "mm: do not update memcg stats for
NR_{FILE/SHMEM}_PMDMAPPED":
https://lore.kernel.org/lkml/20240506192924.271999-1-yosryahmed@google.com/

David, I was on the fence about adding a Suggested-by here. You did
suggest adding a helper, but the one with the extra folio_test_anon()
was my idea and I didn't want to blame it on you. So I'll leave this up
to you :)
---
 mm/rmap.c | 56 ++++++++++++++++++++++++++++---------------------------
 1 file changed, 29 insertions(+), 27 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index ed7f820369864..9ed995da47099 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1269,6 +1269,28 @@ static void __page_check_anon_rmap(struct folio *folio, struct page *page,
 		       page);
 }
 
+static void __folio_mod_stat(struct folio *folio, int nr, int nr_pmdmapped)
+{
+	int idx;
+
+	if (nr) {
+		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
+		__lruvec_stat_mod_folio(folio, idx, nr);
+	}
+	if (nr_pmdmapped) {
+		if (folio_test_anon(folio)) {
+			idx = NR_ANON_THPS;
+			__lruvec_stat_mod_folio(folio, idx, nr_pmdmapped);
+		} else {
+			/* NR_*_PMDMAPPED are not maintained per-memcg */
+			idx = folio_test_swapbacked(folio) ?
+				NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED;
+			__mod_node_page_state(folio_pgdat(folio), idx,
+					      nr_pmdmapped);
+		}
+	}
+}
+
 static __always_inline void __folio_add_anon_rmap(struct folio *folio,
 		struct page *page, int nr_pages, struct vm_area_struct *vma,
 		unsigned long address, rmap_t flags, enum rmap_level level)
@@ -1276,10 +1298,6 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
 	int i, nr, nr_pmdmapped = 0;
 
 	nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
-	if (nr_pmdmapped)
-		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr_pmdmapped);
-	if (nr)
-		__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
 
 	if (unlikely(!folio_test_anon(folio))) {
 		VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
@@ -1297,6 +1315,8 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
 		__page_check_anon_rmap(folio, page, vma, address);
 	}
 
+	__folio_mod_stat(folio, nr, nr_pmdmapped);
+
 	if (flags & RMAP_EXCLUSIVE) {
 		switch (level) {
 		case RMAP_LEVEL_PTE:
@@ -1393,6 +1413,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		unsigned long address)
 {
 	int nr = folio_nr_pages(folio);
+	int nr_pmdmapped = 0;
 
 	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
 	VM_BUG_ON_VMA(address < vma->vm_start ||
@@ -1425,27 +1446,22 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		atomic_set(&folio->_large_mapcount, 0);
 		atomic_set(&folio->_nr_pages_mapped, ENTIRELY_MAPPED);
 		SetPageAnonExclusive(&folio->page);
-		__lruvec_stat_mod_folio(folio, NR_ANON_THPS, nr);
+		nr_pmdmapped = nr;
 	}
 
-	__lruvec_stat_mod_folio(folio, NR_ANON_MAPPED, nr);
+	__folio_mod_stat(folio, nr, nr_pmdmapped);
 }
 
 static __always_inline void __folio_add_file_rmap(struct folio *folio,
 		struct page *page, int nr_pages, struct vm_area_struct *vma,
 		enum rmap_level level)
 {
-	pg_data_t *pgdat = folio_pgdat(folio);
 	int nr, nr_pmdmapped = 0;
 
 	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
 
 	nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
-	if (nr_pmdmapped)
-		__mod_node_page_state(pgdat, folio_test_swapbacked(folio) ?
-			NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
-	if (nr)
-		__lruvec_stat_mod_folio(folio, NR_FILE_MAPPED, nr);
+	__folio_mod_stat(folio, nr, nr_pmdmapped);
 
 	/* See comments in folio_add_anon_rmap_*() */
 	if (!folio_test_large(folio))
@@ -1494,10 +1510,8 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		enum rmap_level level)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	pg_data_t *pgdat = folio_pgdat(folio);
 	int last, nr = 0, nr_pmdmapped = 0;
 	bool partially_mapped = false;
-	enum node_stat_item idx;
 
 	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
 
@@ -1541,20 +1555,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		break;
 	}
 
-	if (nr_pmdmapped) {
-		/* NR_{FILE/SHMEM}_PMDMAPPED are not maintained per-memcg */
-		if (folio_test_anon(folio))
-			__lruvec_stat_mod_folio(folio, NR_ANON_THPS, -nr_pmdmapped);
-		else
-			__mod_node_page_state(pgdat,
-					      folio_test_swapbacked(folio) ?
-					      NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED,
-					      -nr_pmdmapped);
-	}
 	if (nr) {
-		idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
-		__lruvec_stat_mod_folio(folio, idx, -nr);
-
 		/*
 		 * Queue anon large folio for deferred split if at least one
 		 * page of the folio is unmapped and at least one page
@@ -1566,6 +1567,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		    list_empty(&folio->_deferred_list))
 			deferred_split_folio(folio);
 	}
+	__folio_mod_stat(folio, nr, nr_pmdmapped);
 
 	/*
 	 * It would be tidy to reset folio_test_anon mapping when fully