From patchwork Mon Dec 4 14:21:30 2023
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13478526
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Hugh Dickins, Ryan Roberts, Yin Fengwei, Mike Kravetz, Muchun Song, Peter Xu
Subject: [PATCH RFC 23/39] mm/rmap: introduce folio_remove_rmap_[pte|ptes|pmd]()
Date: Mon, 4 Dec 2023 15:21:30 +0100
Message-ID: <20231204142146.91437-24-david@redhat.com>
In-Reply-To: <20231204142146.91437-1-david@redhat.com>
References: <20231204142146.91437-1-david@redhat.com>

Let's mimic what we did with folio_add_file_rmap_*() and
folio_add_anon_rmap_*() so we can similarly replace page_remove_rmap()
next.

Make the compiler always special-case on the granularity by using
__always_inline. We're adding folio_remove_rmap_ptes() handling right
away, as we want to use that soon for batching rmap operations when
unmapping PTE-mapped large folios.
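With these helpers in place, converting a call site is mechanical. The
sketch below is hypothetical ("pmd_mapped" and "nr" are made-up names
for whatever the surrounding page-table walk already knows); actual
conversions only follow in later patches of this series:

	/*
	 * Hypothetical caller sketch. "folio", "page", "nr" and "vma"
	 * come from the surrounding page-table walk; the page table
	 * lock is held, as the new helpers require.
	 */
	if (pmd_mapped)
		/* One PMD mapping of the whole THP goes away. */
		folio_remove_rmap_pmd(folio, page, vma);
	else if (nr > 1)
		/* nr consecutive PTE-mapped subpages, batched in one call. */
		folio_remove_rmap_ptes(folio, page, nr, vma);
	else
		/* Single-PTE case, replacing page_remove_rmap(..., false). */
		folio_remove_rmap_pte(folio, page, vma);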
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/rmap.h |  6 ++++
 mm/rmap.c            | 76 ++++++++++++++++++++++++++++++++++++--------
 2 files changed, 68 insertions(+), 14 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 017b216915f19..dd4ffb1d8ae04 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -241,6 +241,12 @@ void folio_add_file_rmap_pmd(struct folio *, struct page *,
 		struct vm_area_struct *);
 void page_remove_rmap(struct page *, struct vm_area_struct *,
 		bool compound);
+void folio_remove_rmap_ptes(struct folio *, struct page *, unsigned int nr,
+		struct vm_area_struct *);
+#define folio_remove_rmap_pte(folio, page, vma) \
+	folio_remove_rmap_ptes(folio, page, 1, vma)
+void folio_remove_rmap_pmd(struct folio *, struct page *,
+		struct vm_area_struct *);
 
 void hugetlb_add_anon_rmap(struct folio *, struct vm_area_struct *,
 		unsigned long address, rmap_t flags);
diff --git a/mm/rmap.c b/mm/rmap.c
index 3587225055c5e..50b6909157ac1 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1463,25 +1463,36 @@
 void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 		bool compound)
 {
 	struct folio *folio = page_folio(page);
+
+	if (likely(!compound))
+		folio_remove_rmap_pte(folio, page, vma);
+	else
+		folio_remove_rmap_pmd(folio, page, vma);
+}
+
+static __always_inline void __folio_remove_rmap(struct folio *folio,
+		struct page *page, unsigned int nr_pages,
+		struct vm_area_struct *vma, enum rmap_mode mode)
+{
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int nr = 0, nr_pmdmapped = 0;
-	bool last;
+	int last, nr = 0, nr_pmdmapped = 0;
 	enum node_stat_item idx;
 
-	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
-	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
+	__folio_rmap_sanity_checks(folio, page, nr_pages, mode);
 
 	/* Is page being unmapped by PTE? Is this its last map to be removed? */
-	if (likely(!compound)) {
-		last = atomic_add_negative(-1, &page->_mapcount);
-		nr = last;
-		if (last && folio_test_large(folio)) {
-			nr = atomic_dec_return_relaxed(mapped);
-			nr = (nr < COMPOUND_MAPPED);
-		}
-	} else if (folio_test_pmd_mappable(folio)) {
-		/* That test is redundant: it's for safety or to optimize out */
+	if (likely(mode == RMAP_MODE_PTE)) {
+		do {
+			last = atomic_add_negative(-1, &page->_mapcount);
+			if (last && folio_test_large(folio)) {
+				last = atomic_dec_return_relaxed(mapped);
+				last = (last < COMPOUND_MAPPED);
+			}
+			if (last)
+				nr++;
+		} while (page++, --nr_pages > 0);
+	} else if (mode == RMAP_MODE_PMD) {
 		last = atomic_add_negative(-1, &folio->_entire_mapcount);
 		if (last) {
 			nr = atomic_sub_return_relaxed(COMPOUND_MAPPED, mapped);
@@ -1517,7 +1528,7 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 	 * is still mapped.
 	 */
 	if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
-		if (!compound || nr < nr_pmdmapped)
+		if (mode == RMAP_MODE_PTE || nr < nr_pmdmapped)
 			deferred_split_folio(folio);
 }
 
@@ -1532,6 +1543,43 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
 	munlock_vma_folio(folio, vma);
 }
 
+/**
+ * folio_remove_rmap_ptes - remove PTE mappings from a page range of a folio
+ * @folio:	The folio to remove the mappings from
+ * @page:	The first page to remove
+ * @nr_pages:	The number of pages that will be removed from the mapping
+ * @vma:	The vm area from which the mappings are removed
+ *
+ * The page range of the folio is defined by [page, page + nr_pages)
+ *
+ * The caller needs to hold the page table lock.
+ */
+void folio_remove_rmap_ptes(struct folio *folio, struct page *page,
+		unsigned int nr_pages, struct vm_area_struct *vma)
+{
+	__folio_remove_rmap(folio, page, nr_pages, vma, RMAP_MODE_PTE);
+}
+
+/**
+ * folio_remove_rmap_pmd - remove a PMD mapping from a page range of a folio
+ * @folio:	The folio to remove the mapping from
+ * @page:	The first page to remove
+ * @vma:	The vm area from which the mapping is removed
+ *
+ * The page range of the folio is defined by [page, page + HPAGE_PMD_NR)
+ *
+ * The caller needs to hold the page table lock.
+ */
+void folio_remove_rmap_pmd(struct folio *folio, struct page *page,
+		struct vm_area_struct *vma)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	__folio_remove_rmap(folio, page, HPAGE_PMD_NR, vma, RMAP_MODE_PMD);
+#else
+	WARN_ON_ONCE(true);
+#endif
+}
+
 /*
  * @arg: enum ttu_flags will be passed to this argument
  */
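The RMAP_MODE_PTE loop is a little subtle because "last" is reused both
for the per-page atomic_add_negative() result and for the
COMPOUND_MAPPED comparison. The stand-alone, non-atomic model below
mimics that control flow so it can be checked in userspace; the toy
structs and the COMPOUND_MAPPED value are illustrative stand-ins, not
the kernel's actual layout or constants:

	#include <stdio.h>

	#define COMPOUND_MAPPED (1 << 22)	/* stand-in for the kernel constant */

	/* Toy, non-atomic stand-ins for the real counters. */
	struct toy_folio {
		int nr_pages_mapped;	/* models folio->_nr_pages_mapped */
		int large;		/* models folio_test_large() */
	};

	struct toy_page {
		int mapcount;		/* 0 means "no mapping left" */
	};

	/* Mimics the RMAP_MODE_PTE loop in __folio_remove_rmap(). */
	static int remove_rmap_ptes(struct toy_folio *folio, struct toy_page *page,
				    unsigned int nr_pages)
	{
		int last, nr = 0;

		do {
			/* models atomic_add_negative(-1, &page->_mapcount) */
			last = (--page->mapcount == 0);
			if (last && folio->large) {
				/* models atomic_dec_return_relaxed(mapped) */
				last = --folio->nr_pages_mapped;
				/* a set COMPOUND_MAPPED bit means "still PMD-mapped" */
				last = (last < COMPOUND_MAPPED);
			}
			if (last)
				nr++;
		} while (page++, --nr_pages > 0);

		return nr;	/* pages that lost their last PTE mapping */
	}

	int main(void)
	{
		struct toy_folio folio = { .nr_pages_mapped = 2, .large = 1 };
		struct toy_page pages[2] = { { .mapcount = 1 }, { .mapcount = 2 } };

		/* Only pages[0] drops to zero, so this prints "nr = 1". */
		printf("nr = %d\n", remove_rmap_ptes(&folio, pages, 2));
		return 0;
	}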