From patchwork Thu Feb 22 16:09:43 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13567556
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, Matthew Wilcox
Subject: [PATCH v1] mm: remove total_mapcount()
Date: Thu, 22 Feb 2024 17:09:43 +0100
Message-ID: <20240222160943.622386-1-david@redhat.com>

mm/memfd.c is the last remaining user of total_mapcount(). Let's convert
memfd_tag_pins() and memfd_wait_for_pins() to use folios instead of pages,
so we can remove total_mapcount() for good.

We always get a head page, so we can just naturally interpret it as a folio
(similar to other code).

Cc: Andrew Morton
Cc: Matthew Wilcox (Oracle)
Signed-off-by: David Hildenbrand
Signed-off-by: Matthew Wilcox (Oracle)
---

Did a quick test with write-sealing a memfd backed by THP. Seems to work
as it used to.

---
 include/linux/mm.h |  9 +--------
 mm/memfd.c         | 34 ++++++++++++++++++----------------
 2 files changed, 19 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6f4825d82965..49e22a2f6ccc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1183,7 +1183,7 @@ static inline int is_vmalloc_or_module_addr(const void *x)
  * How many times the entire folio is mapped as a single unit (eg by a
  * PMD or PUD entry). This is probably not what you want, except for
  * debugging purposes - it does not include PTE-mapped sub-pages; look
- * at folio_mapcount() or page_mapcount() or total_mapcount() instead.
+ * at folio_mapcount() or page_mapcount() instead.
  */
 static inline int folio_entire_mapcount(struct folio *folio)
 {
@@ -1243,13 +1243,6 @@ static inline int folio_mapcount(struct folio *folio)
 	return folio_total_mapcount(folio);
 }
 
-static inline int total_mapcount(struct page *page)
-{
-	if (likely(!PageCompound(page)))
-		return atomic_read(&page->_mapcount) + 1;
-	return folio_total_mapcount(page_folio(page));
-}
-
 static inline bool folio_large_is_mapped(struct folio *folio)
 {
 	/*
diff --git a/mm/memfd.c b/mm/memfd.c
index d3a1ba4208c9..0a6c1a6ee03b 100644
--- a/mm/memfd.c
+++ b/mm/memfd.c
@@ -31,24 +31,25 @@
 
 static void memfd_tag_pins(struct xa_state *xas)
 {
-	struct page *page;
+	struct folio *folio;
 	int latency = 0;
 	int cache_count;
 
 	lru_add_drain();
 
 	xas_lock_irq(xas);
-	xas_for_each(xas, page, ULONG_MAX) {
+	xas_for_each(xas, folio, ULONG_MAX) {
 		cache_count = 1;
-		if (!xa_is_value(page) &&
-		    PageTransHuge(page) && !PageHuge(page))
+		if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+		    !xa_is_value(folio) && folio_test_large(folio) &&
+		    !folio_test_hugetlb(folio))
 			cache_count = HPAGE_PMD_NR;
 
-		if (!xa_is_value(page) &&
-		    page_count(page) - total_mapcount(page) != cache_count)
+		if (!xa_is_value(folio) && cache_count !=
+		    folio_ref_count(folio) - folio_mapcount(folio))
 			xas_set_mark(xas, MEMFD_TAG_PINNED);
 		if (cache_count != 1)
-			xas_set(xas, page->index + cache_count);
+			xas_set(xas, folio->index + cache_count);
 
 		latency += cache_count;
 		if (latency < XA_CHECK_SCHED)
@@ -66,16 +67,16 @@ static void memfd_tag_pins(struct xa_state *xas)
 /*
  * Setting SEAL_WRITE requires us to verify there's no pending writer. However,
  * via get_user_pages(), drivers might have some pending I/O without any active
- * user-space mappings (eg., direct-IO, AIO). Therefore, we look at all pages
+ * user-space mappings (eg., direct-IO, AIO). Therefore, we look at all folios
  * and see whether it has an elevated ref-count. If so, we tag them and wait for
  * them to be dropped.
  * The caller must guarantee that no new user will acquire writable references
- * to those pages to avoid races.
+ * to those folios to avoid races.
  */
 static int memfd_wait_for_pins(struct address_space *mapping)
 {
 	XA_STATE(xas, &mapping->i_pages, 0);
-	struct page *page;
+	struct folio *folio;
 	int error, scan;
 
 	memfd_tag_pins(&xas);
@@ -95,20 +96,21 @@ static int memfd_wait_for_pins(struct address_space *mapping)
 
 		xas_set(&xas, 0);
 		xas_lock_irq(&xas);
-		xas_for_each_marked(&xas, page, ULONG_MAX, MEMFD_TAG_PINNED) {
+		xas_for_each_marked(&xas, folio, ULONG_MAX, MEMFD_TAG_PINNED) {
 			bool clear = true;
 
 			cache_count = 1;
-			if (!xa_is_value(page) &&
-			    PageTransHuge(page) && !PageHuge(page))
+			if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
+			    !xa_is_value(folio) && folio_test_large(folio) &&
+			    !folio_test_hugetlb(folio))
 				cache_count = HPAGE_PMD_NR;
 
-			if (!xa_is_value(page) && cache_count !=
-			    page_count(page) - total_mapcount(page)) {
+			if (!xa_is_value(folio) && cache_count !=
+			    folio_ref_count(folio) - folio_mapcount(folio)) {
 				/*
 				 * On the last scan, we clean up all those tags
 				 * we inserted; but make a note that we still
-				 * found pages pinned.
+				 * found folios pinned.
 				 */
 				if (scan == LAST_SCAN)
 					error = -EBUSY;
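
Not part of the patch, purely as a reading aid: the "does this folio still
have extra references?" test that both loops open-code, written out as a
stand-alone helper and annotated. This is an illustrative sketch only;
memfd_folio_maybe_pinned() is a made-up name, it assumes the kernel headers
mm/memfd.c already pulls in, and it expects callers to have skipped
xa_is_value() entries beforehand, as both loops above do.

/* Illustration only, not part of the patch. */
static bool memfd_folio_maybe_pinned(struct folio *folio)
{
	/* References we expect the page cache itself to hold. */
	int cache_count = 1;

	/* A THP in the page cache holds HPAGE_PMD_NR cache references. */
	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
	    folio_test_large(folio) && !folio_test_hugetlb(folio))
		cache_count = HPAGE_PMD_NR;

	/*
	 * Anything beyond the page cache references and the page table
	 * mappings (folio_mapcount()) is an extra reference, e.g. a
	 * get_user_pages() pin from direct-IO or AIO.
	 */
	return folio_ref_count(folio) - folio_mapcount(folio) != cache_count;
}

The patch keeps this check open-coded because the tagging loop also reuses
cache_count to advance the xarray cursor past large folios
(xas_set(xas, folio->index + cache_count)) and to account for scan latency.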