From patchwork Thu Aug 29 16:56:04 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13783454
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org,
    linux-fsdevel@vger.kernel.org, Andrew Morton, "Matthew Wilcox (Oracle)",
    Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný, Jonathan Corbet,
    Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
Subject: [PATCH v1 01/17] mm: factor out large folio handling from folio_order() into folio_large_order()
Date: Thu, 29 Aug 2024 18:56:04 +0200
Message-ID: <20240829165627.2256514-2-david@redhat.com>
In-Reply-To: <20240829165627.2256514-1-david@redhat.com>

Let's factor it out into a simple helper function.
This helper will also come in handy when working with code where we
know that our folio is large.

Signed-off-by: David Hildenbrand
Reviewed-by: Lance Yang
Reviewed-by: Kirill A. Shutemov
---
 include/linux/mm.h | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b31d4bdd65ad5..3c6270f87bdc3 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1071,6 +1071,11 @@ int vma_is_stack_for_current(struct vm_area_struct *vma);
 struct mmu_gather;
 struct inode;
 
+static inline unsigned int folio_large_order(const struct folio *folio)
+{
+	return folio->_flags_1 & 0xff;
+}
+
 /*
  * compound_order() can be called without holding a reference, which means
  * that niceties like page_folio() don't work. These callers should be
@@ -1084,7 +1089,7 @@ static inline unsigned int compound_order(struct page *page)
 
 	if (!test_bit(PG_head, &folio->flags))
 		return 0;
-	return folio->_flags_1 & 0xff;
+	return folio_large_order(folio);
 }
 
 /**
@@ -1100,7 +1105,7 @@ static inline unsigned int folio_order(const struct folio *folio)
 {
 	if (!folio_test_large(folio))
 		return 0;
-	return folio->_flags_1 & 0xff;
+	return folio_large_order(folio);
 }
 
 #include
@@ -2035,7 +2040,7 @@ static inline long folio_nr_pages(const struct folio *folio)
 #ifdef CONFIG_64BIT
 	return folio->_folio_nr_pages;
 #else
-	return 1L << (folio->_flags_1 & 0xff);
+	return 1L << folio_large_order(folio);
 #endif
 }
 
@@ -2060,7 +2065,7 @@ static inline unsigned long compound_nr(struct page *page)
 #ifdef CONFIG_64BIT
 	return folio->_folio_nr_pages;
 #else
-	return 1L << (folio->_flags_1 & 0xff);
+	return 1L << folio_large_order(folio);
 #endif
 }
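For orientation (this sketch is not part of the patch, and inspect_folio()
is a made-up caller): folio_order() stays the safe, general-purpose
accessor that returns 0 for small folios, while folio_large_order() is
meant for paths that have already established that the folio is large.

/* Illustrative only -- a hypothetical caller, not kernel code. */
static void inspect_folio(const struct folio *folio)
{
	/* Safe on any folio: small folios simply report order 0. */
	unsigned int order = folio_order(folio);

	if (folio_test_large(folio)) {
		/*
		 * The "large" check has already been done above, so the raw
		 * helper can be used without repeating it.
		 */
		order = folio_large_order(folio);
	}

	pr_debug("folio order: %u\n", order);
}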
From patchwork Thu Aug 29 16:56:05 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13783455
From: David Hildenbrand
Subject: [PATCH v1 02/17] mm: factor out large folio handling from folio_nr_pages() into folio_large_nr_pages()
Date: Thu, 29 Aug 2024 18:56:05 +0200
Message-ID: <20240829165627.2256514-3-david@redhat.com>
In-Reply-To: <20240829165627.2256514-1-david@redhat.com>
Let's factor it out into a simple helper function. This helper will
also come in handy when working with code where we know that our
folio is large.

Make use of it in internal.h and mm.h, where applicable.

While at it, let's consistently return a "long" value from all these
similar functions. Note that we cannot use "unsigned int" (even though
_folio_nr_pages is of that type), because it would break some callers
that do stuff like "-folio_nr_pages()". Either "int" or "unsigned long"
would work as well.

Signed-off-by: David Hildenbrand
Reviewed-by: Kirill A. Shutemov
---
 include/linux/mm.h | 27 ++++++++++++++-------------
 mm/internal.h      |  2 +-
 2 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3c6270f87bdc3..fa8b6ce54235c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1076,6 +1076,15 @@ static inline unsigned int folio_large_order(const struct folio *folio)
 	return folio->_flags_1 & 0xff;
 }
 
+static inline long folio_large_nr_pages(const struct folio *folio)
+{
+#ifdef CONFIG_64BIT
+	return folio->_folio_nr_pages;
+#else
+	return 1L << folio_large_order(folio);
+#endif
+}
+
 /*
  * compound_order() can be called without holding a reference, which means
  * that niceties like page_folio() don't work. These callers should be
@@ -2037,11 +2046,7 @@ static inline long folio_nr_pages(const struct folio *folio)
 {
 	if (!folio_test_large(folio))
 		return 1;
-#ifdef CONFIG_64BIT
-	return folio->_folio_nr_pages;
-#else
-	return 1L << folio_large_order(folio);
-#endif
+	return folio_large_nr_pages(folio);
 }
 
 /* Only hugetlbfs can allocate folios larger than MAX_ORDER */
@@ -2056,24 +2061,20 @@ static inline long folio_nr_pages(const struct folio *folio)
  * page. compound_nr() can be called on a tail page, and is defined to
  * return 1 in that case.
  */
-static inline unsigned long compound_nr(struct page *page)
+static inline long compound_nr(struct page *page)
 {
 	struct folio *folio = (struct folio *)page;
 
 	if (!test_bit(PG_head, &folio->flags))
 		return 1;
-#ifdef CONFIG_64BIT
-	return folio->_folio_nr_pages;
-#else
-	return 1L << folio_large_order(folio);
-#endif
+	return folio_large_nr_pages(folio);
 }
 
 /**
  * thp_nr_pages - The number of regular pages in this huge page.
  * @page: The head page of a huge page.
  */
-static inline int thp_nr_pages(struct page *page)
+static inline long thp_nr_pages(struct page *page)
 {
 	return folio_nr_pages((struct folio *)page);
 }
@@ -2183,7 +2184,7 @@ static inline bool folio_likely_mapped_shared(struct folio *folio)
 		return false;
 
 	/* If any page is mapped more than once we treat it "mapped shared". */
-	if (folio_entire_mapcount(folio) || mapcount > folio_nr_pages(folio))
+	if (folio_entire_mapcount(folio) || mapcount > folio_large_nr_pages(folio))
 		return true;
 
 	/* Let's guess based on the first subpage. */
diff --git a/mm/internal.h b/mm/internal.h
index 44c8dec1f0d75..97d6b94429ebd 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -159,7 +159,7 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
 		pte_t *start_ptep, pte_t pte, int max_nr, fpb_t flags,
 		bool *any_writable, bool *any_young, bool *any_dirty)
 {
-	unsigned long folio_end_pfn = folio_pfn(folio) + folio_nr_pages(folio);
+	unsigned long folio_end_pfn = folio_pfn(folio) + folio_large_nr_pages(folio);
 	const pte_t *end_ptep = start_ptep + max_nr;
 	pte_t expected_pte, *ptep;
 	bool writable, young, dirty;
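The signedness argument from the changelog is easy to demonstrate in
isolation. The toy user-space program below is illustrative only
(assuming an LP64 system, i.e. 64-bit "long"); it shows why a helper
returning "unsigned int" would break callers that negate the result,
which is exactly the "-folio_nr_pages()" pattern mentioned above.

#include <stdio.h>

int main(void)
{
	unsigned int nr_unsigned = 512;	/* pages in a PMD-sized folio */
	long nr_signed = 512;

	long bad = -nr_unsigned;	/* negated in unsigned arithmetic */
	long good = -nr_signed;

	printf("unsigned helper: %ld\n", bad);	/* 4294966784, not -512 */
	printf("signed helper:   %ld\n", good);	/* -512, as intended */
	return 0;
}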
From patchwork Thu Aug 29 16:56:06 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13783456
From: David Hildenbrand
Subject: [PATCH v1 03/17] mm/rmap: use folio_large_nr_pages() in add/remove functions
Date: Thu, 29 Aug 2024 18:56:06 +0200
Message-ID: <20240829165627.2256514-4-david@redhat.com>
In-Reply-To: <20240829165627.2256514-1-david@redhat.com>
Let's just use the "large" variant in code where we are sure that we
have a large folio in our hands: this way we avoid performing any
unnecessary "large" checks.

While at it, convert the VM_BUG_ON_VMA to a VM_WARN_ON_ONCE.

Signed-off-by: David Hildenbrand
Reviewed-by: Kirill A. Shutemov
---
 mm/rmap.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 78529cf0fd668..6594c122a5895 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1184,7 +1184,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
 		if (first) {
 			nr = atomic_add_return_relaxed(ENTIRELY_MAPPED, mapped);
 			if (likely(nr < ENTIRELY_MAPPED + ENTIRELY_MAPPED)) {
-				*nr_pmdmapped = folio_nr_pages(folio);
+				*nr_pmdmapped = folio_large_nr_pages(folio);
 				nr = *nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
 				/* Raced ahead of a remove and another add? */
 				if (unlikely(nr < 0))
@@ -1418,14 +1418,11 @@ void folio_add_anon_rmap_pmd(struct folio *folio, struct page *page,
 void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		unsigned long address, rmap_t flags)
 {
-	const int nr = folio_nr_pages(folio);
 	const bool exclusive = flags & RMAP_EXCLUSIVE;
-	int nr_pmdmapped = 0;
+	int nr = 1, nr_pmdmapped = 0;
 
 	VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
 	VM_WARN_ON_FOLIO(!exclusive && !folio_test_locked(folio), folio);
-	VM_BUG_ON_VMA(address < vma->vm_start ||
-			address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
 
 	/*
 	 * VM_DROPPABLE mappings don't swap; instead they're just dropped when
@@ -1443,6 +1440,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 	} else if (!folio_test_pmd_mappable(folio)) {
 		int i;
 
+		nr = folio_large_nr_pages(folio);
 		for (i = 0; i < nr; i++) {
 			struct page *page = folio_page(folio, i);
 
@@ -1456,6 +1454,7 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		atomic_set(&folio->_large_mapcount, nr - 1);
 		atomic_set(&folio->_nr_pages_mapped, nr);
 	} else {
+		nr = folio_large_nr_pages(folio);
 		/* increment count (starts at -1) */
 		atomic_set(&folio->_entire_mapcount, 0);
 		/* increment count (starts at -1) */
@@ -1466,6 +1465,9 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
 		nr_pmdmapped = nr;
 	}
 
+	VM_WARN_ON_ONCE(address < vma->vm_start ||
+			address + (nr << PAGE_SHIFT) > vma->vm_end);
+
 	__folio_mod_stat(folio, nr, nr_pmdmapped);
 	mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
 }
@@ -1557,7 +1559,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		if (last) {
 			nr = atomic_sub_return_relaxed(ENTIRELY_MAPPED, mapped);
 			if (likely(nr < ENTIRELY_MAPPED)) {
-				nr_pmdmapped = folio_nr_pages(folio);
+				nr_pmdmapped = folio_large_nr_pages(folio);
 				nr = nr_pmdmapped - (nr & FOLIO_PAGES_MAPPED);
 				/* Raced ahead of another remove and an add? */
 				if (unlikely(nr < 0))
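Two things are worth spelling out about the debug-check change in
folio_add_new_anon_rmap(): the check has to move below the branches
because "nr" is now only computed there, and its severity changes. A
condensed before/after sketch (illustrative, not the literal kernel
code):

	/* Before: evaluated up front (nr was computed unconditionally) and,
	 * with CONFIG_DEBUG_VM, a fatal BUG that also dumps the VMA. */
	VM_BUG_ON_VMA(address < vma->vm_start ||
		      address + (nr << PAGE_SHIFT) > vma->vm_end, vma);

	/* After: evaluated once "nr" is known for the folio size at hand,
	 * and it only prints a one-time warning instead of crashing. */
	VM_WARN_ON_ONCE(address < vma->vm_start ||
			address + (nr << PAGE_SHIFT) > vma->vm_end);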
From patchwork Thu Aug 29 16:56:07 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13783457
From: David Hildenbrand
Subject: [PATCH v1 04/17] mm: let _folio_nr_pages overlay memcg_data in first tail page
Date: Thu, 29 Aug 2024 18:56:07 +0200
Message-ID: <20240829165627.2256514-5-david@redhat.com>
In-Reply-To: <20240829165627.2256514-1-david@redhat.com>

Let's free up some more of the "unconditionally available on 64BIT"
space in order-1 folios by letting
_folio_nr_pages overlay memcg_data in the first tail page (second folio
page). Consequently, we have the optimization now whenever we have
CONFIG_MEMCG, independent of 64BIT.

We have to make sure that page->memcg on tail pages does not return
"surprises". page_memcg_check() already properly refuses PageTail().
Let's do that earlier in print_page_owner_memcg() to avoid printing
wrong "Slab cache page" information.

No other code should touch that field on tail pages of compound pages.

Reset the "_nr_pages" to 0 when splitting folios, or when freeing them
back to the buddy (to avoid false page->memcg_data "bad page" reports).
Note that in __split_huge_page(), folio_nr_pages() would stop working
already as soon as we start messing with the subpages.

Most kernel configs should have at least CONFIG_MEMCG enabled, even if
disabled at runtime. A 64-byte "struct memmap" is what we usually have
on 64BIT.

While at it, rename "_folio_nr_pages" to "_nr_pages".

Signed-off-by: David Hildenbrand
Reviewed-by: Kirill A. Shutemov
---
 include/linux/mm.h       |  4 ++--
 include/linux/mm_types.h | 30 ++++++++++++++++++++++--------
 mm/huge_memory.c         |  8 ++++++++
 mm/internal.h            |  4 ++--
 mm/page_alloc.c          |  6 +++++-
 mm/page_owner.c          |  2 +-
 6 files changed, 40 insertions(+), 14 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index fa8b6ce54235c..98411e53da916 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1078,8 +1078,8 @@ static inline unsigned int folio_large_order(const struct folio *folio)
 
 static inline long folio_large_nr_pages(const struct folio *folio)
 {
-#ifdef CONFIG_64BIT
-	return folio->_folio_nr_pages;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+	return folio->_nr_pages;
 #else
 	return 1L << folio_large_order(folio);
 #endif
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 6e3bdf8e38bca..480548552ea54 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -283,6 +283,11 @@ typedef struct {
 	unsigned long val;
 } swp_entry_t;
 
+#if defined(CONFIG_MEMCG) || defined(CONFIG_SLAB_OBJ_EXT)
+/* We have some extra room after the refcount in tail pages. */
+#define NR_PAGES_IN_LARGE_FOLIO
+#endif
+
 /**
  * struct folio - Represents a contiguous set of bytes.
  * @flags: Identical to the page flags.
@@ -305,7 +310,7 @@ typedef struct {
  * @_large_mapcount: Do not use directly, call folio_mapcount().
  * @_nr_pages_mapped: Do not use outside of rmap and debug code.
  * @_pincount: Do not use directly, call folio_maybe_dma_pinned().
- * @_folio_nr_pages: Do not use directly, call folio_nr_pages().
+ * @_nr_pages: Do not use directly, call folio_nr_pages().
 * @_hugetlb_subpool: Do not use directly, use accessor in hugetlb.h.
 * @_hugetlb_cgroup: Do not use directly, use accessor in hugetlb_cgroup.h.
 * @_hugetlb_cgroup_rsvd: Do not use directly, use accessor in hugetlb_cgroup.h.
@@ -366,13 +371,20 @@ struct folio {
 			unsigned long _flags_1;
 			unsigned long _head_1;
 	/* public: */
-			atomic_t _large_mapcount;
-			atomic_t _entire_mapcount;
-			atomic_t _nr_pages_mapped;
-			atomic_t _pincount;
-#ifdef CONFIG_64BIT
-			unsigned int _folio_nr_pages;
-#endif
+			union {
+				struct {
+					atomic_t _large_mapcount;
+					atomic_t _entire_mapcount;
+					atomic_t _nr_pages_mapped;
+					atomic_t _pincount;
+				};
+				unsigned long _usable_1[4];
+			};
+			atomic_t _mapcount_1;
+			atomic_t _refcount_1;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+			unsigned int _nr_pages;
+#endif /* NR_PAGES_IN_LARGE_FOLIO */
 	/* private: the union with struct page is transitional */
 		};
 		struct page __page_1;
@@ -424,6 +436,8 @@ FOLIO_MATCH(_last_cpupid, _last_cpupid);
 			offsetof(struct page, pg) + sizeof(struct page))
 FOLIO_MATCH(flags, _flags_1);
 FOLIO_MATCH(compound_head, _head_1);
+FOLIO_MATCH(_mapcount, _mapcount_1);
+FOLIO_MATCH(_refcount, _refcount_1);
 #undef FOLIO_MATCH
 #define FOLIO_MATCH(pg, fl) \
 	static_assert(offsetof(struct folio, fl) == \
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 15418ffdd3774..28d12573fcf8c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3171,6 +3171,14 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	int order = folio_order(folio);
 	unsigned int nr = 1 << order;
 
+	/*
+	 * Reset any memcg data overlay in the tail pages. folio_nr_pages()
+	 * is unreliable after this point.
+	 */
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+	folio->_nr_pages = 0;
+#endif
+
 	/* complete memcg works before add pages to LRU */
 	split_page_memcg(head, order, new_order);
 
diff --git a/mm/internal.h b/mm/internal.h
index 97d6b94429ebd..f627fd2200464 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -625,8 +625,8 @@ static inline void folio_set_order(struct folio *folio, unsigned int order)
 		return;
 
 	folio->_flags_1 = (folio->_flags_1 & ~0xffUL) | order;
-#ifdef CONFIG_64BIT
-	folio->_folio_nr_pages = 1U << order;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+	folio->_nr_pages = 1U << order;
 #endif
 }
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c2ffccf9d2131..e276cbaf97054 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1077,8 +1077,12 @@ __always_inline bool free_pages_prepare(struct page *page,
 	if (unlikely(order)) {
 		int i;
 
-		if (compound)
+		if (compound) {
 			page[1].flags &= ~PAGE_FLAGS_SECOND;
+#ifdef NR_PAGES_IN_LARGE_FOLIO
+			((struct folio *)page)->_nr_pages = 0;
+#endif
+		}
 		for (i = 1; i < (1 << order); i++) {
 			if (compound)
 				bad += free_tail_page_prepare(page, page + i);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2d6360eaccbb6..a409e2561a8fd 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -507,7 +507,7 @@ static inline int print_page_owner_memcg(char *kbuf, size_t count, int ret,
 
 	rcu_read_lock();
 	memcg_data = READ_ONCE(page->memcg_data);
-	if (!memcg_data)
+	if (!memcg_data || PageTail(page))
 		goto out_unlock;
 
 	if (memcg_data & MEMCG_DATA_OBJEXTS)
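The reason for the explicit "_nr_pages = 0" resets is that the field now
shares storage with what the first tail page would otherwise interpret
as its memcg_data; a stale non-zero value would look like leftover memcg
state once the pages are inspected individually again. A stand-alone toy
model of that aliasing (the union below is made up for illustration; the
real overlay is the struct folio / struct page layout in the hunks
above):

#include <stdio.h>

/* Toy stand-ins for the two views of the same tail-page storage. */
union tail_view {
	struct { unsigned long memcg_data; } page_view;	/* per-page view */
	struct { unsigned int nr_pages; } folio_view;	/* large-folio view */
};

int main(void)
{
	union tail_view tail = { .page_view = { 0 } };

	tail.folio_view.nr_pages = 512;		/* folio in use */
	printf("stale memcg_data: %#lx\n", tail.page_view.memcg_data);

	tail.folio_view.nr_pages = 0;		/* what the patch does on free/split */
	printf("after the reset: %#lx\n", tail.page_view.memcg_data);
	return 0;
}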
From patchwork Thu Aug 29 16:56:08 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13783458
From: David Hildenbrand
Subject: [PATCH v1 05/17] mm/rmap: pass dst_vma to page_try_dup_anon_rmap() and page_dup_file_rmap()
Date: Thu, 29 Aug 2024 18:56:08 +0200
Message-ID: <20240829165627.2256514-6-david@redhat.com>
In-Reply-To: <20240829165627.2256514-1-david@redhat.com>

We'll need access to the destination MM when modifying the total
mapcount of non-hugetlb large folios next. So pass in the destination
VMA.
Signed-off-by: David Hildenbrand
---
 include/linux/rmap.h | 42 +++++++++++++++++++++++++-----------------
 mm/huge_memory.c     |  2 +-
 mm/memory.c          | 10 +++++-----
 3 files changed, 31 insertions(+), 23 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 91b5935e8485e..9e275986f0ef6 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -322,7 +322,8 @@ static inline void hugetlb_remove_rmap(struct folio *folio)
 }
 
 static __always_inline void __folio_dup_file_rmap(struct folio *folio,
-		struct page *page, int nr_pages, enum rmap_level level)
+		struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
+		enum rmap_level level)
 {
 	const int orig_nr_pages = nr_pages;
 
@@ -352,45 +353,47 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio,
  * @folio: The folio to duplicate the mappings of
  * @page: The first page to duplicate the mappings of
  * @nr_pages: The number of pages of which the mapping will be duplicated
+ * @dst_vma: The destination vm area
  *
  * The page range of the folio is defined by [page, page + nr_pages)
 *
 * The caller needs to hold the page table lock.
 */
 static inline void folio_dup_file_rmap_ptes(struct folio *folio,
-		struct page *page, int nr_pages)
+		struct page *page, int nr_pages, struct vm_area_struct *dst_vma)
 {
-	__folio_dup_file_rmap(folio, page, nr_pages, RMAP_LEVEL_PTE);
+	__folio_dup_file_rmap(folio, page, nr_pages, dst_vma, RMAP_LEVEL_PTE);
 }
 
 static __always_inline void folio_dup_file_rmap_pte(struct folio *folio,
-		struct page *page)
+		struct page *page, struct vm_area_struct *dst_vma)
 {
-	__folio_dup_file_rmap(folio, page, 1, RMAP_LEVEL_PTE);
+	__folio_dup_file_rmap(folio, page, 1, dst_vma, RMAP_LEVEL_PTE);
 }
 
 /**
  * folio_dup_file_rmap_pmd - duplicate a PMD mapping of a page range of a folio
  * @folio: The folio to duplicate the mapping of
  * @page: The first page to duplicate the mapping of
+ * @dst_vma: The destination vm area
  *
  * The page range of the folio is defined by [page, page + HPAGE_PMD_NR)
 *
 * The caller needs to hold the page table lock.
 */
 static inline void folio_dup_file_rmap_pmd(struct folio *folio,
-		struct page *page)
+		struct page *page, struct vm_area_struct *dst_vma)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	__folio_dup_file_rmap(folio, page, HPAGE_PMD_NR, RMAP_LEVEL_PTE);
+	__folio_dup_file_rmap(folio, page, HPAGE_PMD_NR, dst_vma, RMAP_LEVEL_PTE);
 #else
 	WARN_ON_ONCE(true);
 #endif
 }
 
 static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *src_vma,
-		enum rmap_level level)
+		struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
+		struct vm_area_struct *src_vma, enum rmap_level level)
 {
 	const int orig_nr_pages = nr_pages;
 	bool maybe_pinned;
@@ -455,6 +458,7 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
  * @folio: The folio to duplicate the mappings of
  * @page: The first page to duplicate the mappings of
  * @nr_pages: The number of pages of which the mapping will be duplicated
+ * @dst_vma: The destination vm area
  * @src_vma: The vm area from which the mappings are duplicated
 *
 * The page range of the folio is defined by [page, page + nr_pages)
@@ -473,16 +477,18 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio,
 * Returns 0 if duplicating the mappings succeeded. Returns -EBUSY otherwise.
 */
 static inline int folio_try_dup_anon_rmap_ptes(struct folio *folio,
-		struct page *page, int nr_pages, struct vm_area_struct *src_vma)
+		struct page *page, int nr_pages, struct vm_area_struct *dst_vma,
+		struct vm_area_struct *src_vma)
 {
-	return __folio_try_dup_anon_rmap(folio, page, nr_pages, src_vma,
-					 RMAP_LEVEL_PTE);
+	return __folio_try_dup_anon_rmap(folio, page, nr_pages, dst_vma,
+					 src_vma, RMAP_LEVEL_PTE);
 }
 
 static __always_inline int folio_try_dup_anon_rmap_pte(struct folio *folio,
-		struct page *page, struct vm_area_struct *src_vma)
+		struct page *page, struct vm_area_struct *dst_vma,
+		struct vm_area_struct *src_vma)
 {
-	return __folio_try_dup_anon_rmap(folio, page, 1, src_vma,
+	return __folio_try_dup_anon_rmap(folio, page, 1, dst_vma, src_vma,
 					 RMAP_LEVEL_PTE);
 }
 
@@ -491,6 +497,7 @@ static __always_inline int folio_try_dup_anon_rmap_pte(struct folio *folio,
  * of a folio
  * @folio: The folio to duplicate the mapping of
  * @page: The first page to duplicate the mapping of
+ * @dst_vma: The destination vm area
  * @src_vma: The vm area from which the mapping is duplicated
 *
 * The page range of the folio is defined by [page, page + HPAGE_PMD_NR)
@@ -509,11 +516,12 @@ static __always_inline int folio_try_dup_anon_rmap_pte(struct folio *folio,
 * Returns 0 if duplicating the mapping succeeded. Returns -EBUSY otherwise.
 */
 static inline int folio_try_dup_anon_rmap_pmd(struct folio *folio,
-		struct page *page, struct vm_area_struct *src_vma)
+		struct page *page, struct vm_area_struct *dst_vma,
+		struct vm_area_struct *src_vma)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	return __folio_try_dup_anon_rmap(folio, page, HPAGE_PMD_NR, src_vma,
-					 RMAP_LEVEL_PMD);
+	return __folio_try_dup_anon_rmap(folio, page, HPAGE_PMD_NR, dst_vma,
+					 src_vma, RMAP_LEVEL_PMD);
 #else
 	WARN_ON_ONCE(true);
 	return -EBUSY;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 28d12573fcf8c..6de84377e8e77 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1642,7 +1642,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 	src_folio = page_folio(src_page);
 	folio_get(src_folio);
-	if (unlikely(folio_try_dup_anon_rmap_pmd(src_folio, src_page, src_vma))) {
+	if (unlikely(folio_try_dup_anon_rmap_pmd(src_folio, src_page, dst_vma, src_vma))) {
 		/* Page maybe pinned: split and retry the fault on PTEs. */
 		folio_put(src_folio);
 		pte_free(dst_mm, pgtable);
diff --git a/mm/memory.c b/mm/memory.c
index 06b42db8a2db7..c2143c40a134b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -856,7 +856,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			folio_get(folio);
 			rss[mm_counter(folio)]++;
 			/* Cannot fail as these pages cannot get pinned. */
-			folio_try_dup_anon_rmap_pte(folio, page, src_vma);
+			folio_try_dup_anon_rmap_pte(folio, page, dst_vma, src_vma);
 
 			/*
 			 * We do not preserve soft-dirty information, because so
@@ -1007,14 +1007,14 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 	folio_ref_add(folio, nr);
 	if (folio_test_anon(folio)) {
 		if (unlikely(folio_try_dup_anon_rmap_ptes(folio, page,
-							  nr, src_vma))) {
+							  nr, dst_vma, src_vma))) {
 			folio_ref_sub(folio, nr);
 			return -EAGAIN;
 		}
 		rss[MM_ANONPAGES] += nr;
 		VM_WARN_ON_FOLIO(PageAnonExclusive(page), folio);
 	} else {
-		folio_dup_file_rmap_ptes(folio, page, nr);
+		folio_dup_file_rmap_ptes(folio, page, nr, dst_vma);
 		rss[mm_counter_file(folio)] += nr;
 	}
 	if (any_writable)
@@ -1032,7 +1032,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 	 * guarantee the pinned page won't be randomly replaced in the
 	 * future.
 	 */
-	if (unlikely(folio_try_dup_anon_rmap_pte(folio, page, src_vma))) {
+	if (unlikely(folio_try_dup_anon_rmap_pte(folio, page, dst_vma, src_vma))) {
 		/* Page may be pinned, we have to copy. */
 		folio_put(folio);
 		err = copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
@@ -1042,7 +1042,7 @@ copy_present_ptes(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 		rss[MM_ANONPAGES]++;
 		VM_WARN_ON_FOLIO(PageAnonExclusive(page), folio);
 	} else {
-		folio_dup_file_rmap_pte(folio, page);
+		folio_dup_file_rmap_pte(folio, page, dst_vma);
 		rss[mm_counter_file(folio)]++;
 	}
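For any remaining or out-of-tree caller, the mechanical part of this
conversion is just threading the destination VMA through. A condensed
before/after sketch of a hypothetical fork-path call site (copy_fallback
is a made-up label; this is not literal kernel code):

	/* Before: only the source VMA was passed. */
	if (unlikely(folio_try_dup_anon_rmap_pte(folio, page, src_vma)))
		goto copy_fallback;		/* folio may be pinned */

	/* After: the destination VMA comes first, so the rmap code can
	 * later reach the destination MM when it updates mapcounts. */
	if (unlikely(folio_try_dup_anon_rmap_pte(folio, page, dst_vma, src_vma)))
		goto copy_fallback;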
From patchwork Thu Aug 29 16:56:09 2024
From: David Hildenbrand <david@redhat.com>
Subject: [PATCH v1 06/17] mm/rmap: pass vma to __folio_add_rmap()
Date: Thu, 29 Aug 2024 18:56:09 +0200
Message-ID: <20240829165627.2256514-7-david@redhat.com>
We'll need access to the destination MM when modifying the total mapcount
of non-hugetlb large folios next. So pass in the VMA.

Signed-off-by: David Hildenbrand
---
 mm/rmap.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 6594c122a5895..ee1bff1638f90 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1153,8 +1153,8 @@ int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
 }
 static __always_inline unsigned int __folio_add_rmap(struct folio *folio,
-		struct page *page, int nr_pages, enum rmap_level level,
-		int *nr_pmdmapped)
+		struct page *page, int nr_pages, struct vm_area_struct *vma,
+		enum rmap_level level, int *nr_pmdmapped)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
 	const int orig_nr_pages = nr_pages;
@@ -1314,7 +1314,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
 	VM_WARN_ON_FOLIO(!folio_test_anon(folio), folio);
-	nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
+	nr = __folio_add_rmap(folio, page, nr_pages, vma, level, &nr_pmdmapped);
 	if (likely(!folio_test_ksm(folio)))
 		__page_check_anon_rmap(folio, page, vma, address);
@@ -1480,7 +1480,7 @@ static __always_inline void __folio_add_file_rmap(struct folio *folio,
 	VM_WARN_ON_FOLIO(folio_test_anon(folio), folio);
-	nr = __folio_add_rmap(folio, page, nr_pages, level, &nr_pmdmapped);
+	nr = __folio_add_rmap(folio, page, nr_pages, vma, level, &nr_pmdmapped);
 	__folio_mod_stat(folio, nr, nr_pmdmapped);
 	/* See comments in folio_add_anon_rmap_*() */
From patchwork Thu Aug 29 16:56:10 2024
From: David Hildenbrand <david@redhat.com>
Subject: [PATCH v1 07/17] mm/rmap: abstract large mapcount operations for large folios (!hugetlb)
Date: Thu, 29 Aug 2024 18:56:10 +0200
Message-ID: <20240829165627.2256514-8-david@redhat.com>

Let's abstract the operations so we can extend these operations easily.
Signed-off-by: David Hildenbrand --- include/linux/rmap.h | 39 +++++++++++++++++++++++++++++++++++---- mm/rmap.c | 14 ++++++-------- 2 files changed, 41 insertions(+), 12 deletions(-) diff --git a/include/linux/rmap.h b/include/linux/rmap.h index 9e275986f0ef6..e3b82a04b4acb 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -173,6 +173,37 @@ static inline void anon_vma_merge(struct vm_area_struct *vma, struct anon_vma *folio_get_anon_vma(struct folio *folio); +static inline void folio_set_large_mapcount(struct folio *folio, int mapcount, + struct vm_area_struct *vma) +{ + /* Note: mapcounts start at -1. */ + atomic_set(&folio->_large_mapcount, mapcount - 1); +} + +static inline void folio_add_large_mapcount(struct folio *folio, + int diff, struct vm_area_struct *vma) +{ + atomic_add(diff, &folio->_large_mapcount); +} + +static inline void folio_inc_large_mapcount(struct folio *folio, + struct vm_area_struct *vma) +{ + atomic_inc(&folio->_large_mapcount); +} + +static inline void folio_sub_large_mapcount(struct folio *folio, + int diff, struct vm_area_struct *vma) +{ + atomic_sub(diff, &folio->_large_mapcount); +} + +static inline void folio_dec_large_mapcount(struct folio *folio, + struct vm_area_struct *vma) +{ + atomic_dec(&folio->_large_mapcount); +} + /* RMAP flags, currently only relevant for some anon rmap operations. */ typedef int __bitwise rmap_t; @@ -339,11 +370,11 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio, do { atomic_inc(&page->_mapcount); } while (page++, --nr_pages > 0); - atomic_add(orig_nr_pages, &folio->_large_mapcount); + folio_add_large_mapcount(folio, orig_nr_pages, dst_vma); break; case RMAP_LEVEL_PMD: atomic_inc(&folio->_entire_mapcount); - atomic_inc(&folio->_large_mapcount); + folio_inc_large_mapcount(folio, dst_vma); break; } } @@ -437,7 +468,7 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio, ClearPageAnonExclusive(page); atomic_inc(&page->_mapcount); } while (page++, --nr_pages > 0); - atomic_add(orig_nr_pages, &folio->_large_mapcount); + folio_add_large_mapcount(folio, orig_nr_pages, dst_vma); break; case RMAP_LEVEL_PMD: if (PageAnonExclusive(page)) { @@ -446,7 +477,7 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio, ClearPageAnonExclusive(page); } atomic_inc(&folio->_entire_mapcount); - atomic_inc(&folio->_large_mapcount); + folio_inc_large_mapcount(folio, dst_vma); break; } return 0; diff --git a/mm/rmap.c b/mm/rmap.c index ee1bff1638f90..226b188499f91 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1177,7 +1177,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio, atomic_add_return_relaxed(first, mapped) < ENTIRELY_MAPPED) nr = first; - atomic_add(orig_nr_pages, &folio->_large_mapcount); + folio_add_large_mapcount(folio, orig_nr_pages, vma); break; case RMAP_LEVEL_PMD: first = atomic_inc_and_test(&folio->_entire_mapcount); @@ -1194,7 +1194,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio, nr = 0; } } - atomic_inc(&folio->_large_mapcount); + folio_inc_large_mapcount(folio, vma); break; } return nr; @@ -1450,15 +1450,13 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma, SetPageAnonExclusive(page); } - /* increment count (starts at -1) */ - atomic_set(&folio->_large_mapcount, nr - 1); + folio_set_large_mapcount(folio, nr, vma); atomic_set(&folio->_nr_pages_mapped, nr); } else { nr = folio_large_nr_pages(folio); /* increment count (starts at -1) */ atomic_set(&folio->_entire_mapcount, 0); 
-	/* increment count (starts at -1) */
-	atomic_set(&folio->_large_mapcount, 0);
+	folio_set_large_mapcount(folio, 1, vma);
 	atomic_set(&folio->_nr_pages_mapped, ENTIRELY_MAPPED);
 	if (exclusive)
 		SetPageAnonExclusive(&folio->page);
@@ -1542,7 +1540,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		break;
 	}
-	atomic_sub(nr_pages, &folio->_large_mapcount);
+	folio_sub_large_mapcount(folio, nr_pages, vma);
 	do {
 		last += atomic_add_negative(-1, &page->_mapcount);
 	} while (page++, --nr_pages > 0);
@@ -1554,7 +1552,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		partially_mapped = nr && atomic_read(mapped);
 		break;
 	case RMAP_LEVEL_PMD:
-		atomic_dec(&folio->_large_mapcount);
+		folio_dec_large_mapcount(folio, vma);
 		last = atomic_add_negative(-1, &folio->_entire_mapcount);
 		if (last) {
 			nr = atomic_sub_return_relaxed(ENTIRELY_MAPPED, mapped);
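
The caller-visible effect of this patch is mechanical; schematically (a
sketch, not an actual hunk), the open-coded atomics are replaced by wrappers
that also receive the VMA, so that a follow-up patch can derive the MM that
maps the folio:

/* Before: the large mapcount was adjusted directly. */
atomic_add(orig_nr_pages, &folio->_large_mapcount);

/*
 * After: go through the wrapper. Without CONFIG_MM_ID (added later), it is
 * a thin inline that performs the same atomic_add() and ignores the vma.
 */
folio_add_large_mapcount(folio, orig_nr_pages, vma);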
From patchwork Thu Aug 29 16:56:11 2024
From: David Hildenbrand <david@redhat.com>
Subject: [PATCH v1 08/17] mm/rmap: initial MM owner tracking for large folios (!hugetlb)
Date: Thu, 29 Aug 2024 18:56:11 +0200
Message-ID: <20240829165627.2256514-9-david@redhat.com>
Let's track for each large folio (excluding hugetlb for now) whether it is
certainly mapped exclusively (mapped by a single MM), or whether it may be
mapped shared (mapped by multiple MMs).

In an ideal world, we'd have a more precise "mapped exclusively" vs. "mapped
shared" tracking -- avoiding the "maybe" part -- but the approaches to achieve
that are a bit more involved, and we are going to start with something simple
so we can also make progress on per-page mapcount removal. We can easily
exchange the tracking mechanism later.

We'll use this information next to optimize COW reuse for PTE-mapped anonymous
THP, and to implement folio_likely_mapped_shared() in kernel configurations
where the per-page mapcounts in large folios are no longer maintained.

We could start doing the MM owner tracking for anonymous folios only (COW
reuse only applies to anon folios), but we'll keep it simple and do it for
pagecache folios as well: the new tracking must be manually enabled via a
kconfig option for now.

64bit only, because we cannot easily squeeze more stuff into the "struct
folio" of order-1 folios. 32bit might be possible in the future, for example
when limiting order-1 folios to 64bit only.

For each large folio, we'll remember, for two MMs that currently map this
folio, how often each of them maps folio pages (a per-MM mapcount). As long as
a folio is unmapped or exclusively mapped, another MM can take a free spot. We
won't allow taking a free spot if the folio is not mapped exclusively:
primarily to avoid corner cases where some mappings of an MM are tracked via
the slot and others are not (identified while working on this).

In addition, we'll remember the current state (exclusive/shared) and use a bit
spinlock to synchronize updates, so that only a single atomic operation is
required per update. Using a bit spinlock is not ideal, but there are not that
many easy alternatives.
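
Schematically, the folio->_mm_ids word introduced by this patch (see the
page-flags.h and rmap.h hunks below) is laid out as follows:

/*
 * folio->_mm_ids (64-bit kernels only):
 *
 *   bits  0..31  MM ID of tracking slot 0 (only 20 bits used, see MM_ID_MAX)
 *   bits 32..55  MM ID of tracking slot 1 (only 20 bits used)
 *   bit  62      "mapped exclusively" flag
 *   bit  63      bit spinlock protecting the word
 */
#define FOLIO_MM_IDS_ID0_MASK		0x00000000fffffffful
#define FOLIO_MM_IDS_ID1_SHIFT		32
#define FOLIO_MM_IDS_ID1_MASK		0x00ffffff00000000ul
#define FOLIO_MM_IDS_EXCLUSIVE_BITNUM	62
#define FOLIO_MM_IDS_LOCK_BITNUM	63

The per-slot mapcounts (_mm0_mapcount / _mm1_mapcount) live in separate
"struct folio" fields and are only updated while holding that lock.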
We might be able to squeeze an arch_spin_lock into the "struct folio" later;
for now, keep it simple. RT is out of the picture with THP, and we can always
optimize this later.

As we have to squeeze this information into the "struct folio" of even
order-1 folios (2 pages), and we generally want to reduce the required
metadata, we'll assign each MM a unique ID that consumes fewer than 32 bits.
We'll limit the IDs to 20 bits / 1M for now: we could allow for up to 30 bits,
but even reaching 1M IDs is unlikely in practice. If required, we could raise
the limit later, and the 1M limit might come in handy in the future with other
tracking approaches.

There won't be any false "mapped shared" detection as long as only two MMs map
pages of a folio at one point in time -- for example with fork() and
short-lived child processes, or with apps that hand over state from one
instance to another, like live-migrating VMs on the same host, effectively
migrating guest RAM via mmap'ed files.

As soon as three MMs are involved at the same time, we might detect "mapped
shared" even though the folio is now "mapped exclusively". Example:

(1) App1 faults in a (shmem/file-backed) folio -> Tracked as MM0
(2) App2 faults in the same folio -> Tracked as MM1
(3) App3 faults in the same folio -> Cannot be tracked separately
(4) App1 and App2 unmap the folio.
(5) We'll still detect "shared" even though only App3 maps the folio.

With multiple processes, this has the potential to result in unexpected owner
changes when migrating pages or when faulting them in: assume a parent process
fork()'s two short-lived child processes. We would expect that the parent
always remains tracked under MM0, but it could be that at some point both
child processes are tracked instead. For file-backed memory, reclaim+refault
can trigger something similar.

Keep compilation for the vdso32 hack working by un-defining CONFIG_MM_ID, like
we do for CONFIG_64BIT.

Make use of __always_inline to keep possible performance degradation when
(un)mapping large folios to a minimum.

Signed-off-by: David Hildenbrand
---
 Documentation/mm/transhuge.rst                |   8 ++
 arch/x86/entry/vdso/vdso32/fake_32bit_build.h |   1 +
 include/linux/mm_types.h                      |  23 ++++
 include/linux/page-flags.h                    |  41 ++++++
 include/linux/rmap.h                          | 126 ++++++++++++++++++
 kernel/fork.c                                 |  36 +++++
 mm/Kconfig                                    |  11 ++
 mm/huge_memory.c                              |   6 +
 mm/internal.h                                 |   6 +
 mm/page_alloc.c                               |  10 ++
 10 files changed, 268 insertions(+)

diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst
index a2cd8800d5279..0ee58108a4d14 100644
--- a/Documentation/mm/transhuge.rst
+++ b/Documentation/mm/transhuge.rst
@@ -120,11 +120,19 @@ pages:
     and also increment/decrement folio->_nr_pages_mapped by ENTIRELY_MAPPED
     when _entire_mapcount goes from -1 to 0 or 0 to -1.
+    With CONFIG_MM_ID, we also maintain the two slots for tracking MM
+    owners (MM ID and corresponding mapcount), and the current status
+    ("mapped shared" vs. "mapped exclusively").
+
   - map/unmap of individual pages with PTE entry increment/decrement
     page->_mapcount, increment/decrement folio->_large_mapcount and also
     increment/decrement folio->_nr_pages_mapped when page->_mapcount goes
     from -1 to 0 or 0 to -1 as this counts the number of pages mapped by PTE.
+    With CONFIG_MM_ID, we also maintain the two slots for tracking MM
+    owners (MM ID and corresponding mapcount), and the current status
+    ("mapped shared" vs. "mapped exclusively").
+ split_huge_page internally has to distribute the refcounts in the head page to the tail pages before clearing all PG_head/tail bits from the page structures. It can be done easily for refcounts taken by page table diff --git a/arch/x86/entry/vdso/vdso32/fake_32bit_build.h b/arch/x86/entry/vdso/vdso32/fake_32bit_build.h index db1b15f686e32..93d2bf13a6280 100644 --- a/arch/x86/entry/vdso/vdso32/fake_32bit_build.h +++ b/arch/x86/entry/vdso/vdso32/fake_32bit_build.h @@ -13,6 +13,7 @@ #undef CONFIG_SPARSEMEM_VMEMMAP #undef CONFIG_NR_CPUS #undef CONFIG_PARAVIRT_XXL +#undef CONFIG_MM_ID #define CONFIG_X86_32 1 #define CONFIG_PGTABLE_LEVELS 2 diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 480548552ea54..6d27856686439 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -311,6 +311,9 @@ typedef struct { * @_nr_pages_mapped: Do not use outside of rmap and debug code. * @_pincount: Do not use directly, call folio_maybe_dma_pinned(). * @_nr_pages: Do not use directly, call folio_nr_pages(). + * @_mm0_mapcount: Do not use outside of rmap code. + * @_mm1_mapcount: Do not use outside of rmap code. + * @_mm_ids: Do not use outside of rmap code. * @_hugetlb_subpool: Do not use directly, use accessor in hugetlb.h. * @_hugetlb_cgroup: Do not use directly, use accessor in hugetlb_cgroup.h. * @_hugetlb_cgroup_rsvd: Do not use directly, use accessor in hugetlb_cgroup.h. @@ -377,6 +380,11 @@ struct folio { atomic_t _entire_mapcount; atomic_t _nr_pages_mapped; atomic_t _pincount; +#ifdef CONFIG_MM_ID + int _mm0_mapcount; + int _mm1_mapcount; + unsigned long _mm_ids; +#endif /* CONFIG_MM_ID */ }; unsigned long _usable_1[4]; }; @@ -1044,6 +1052,9 @@ struct mm_struct { #endif } lru_gen; #endif /* CONFIG_LRU_GEN_WALKS_MMU */ +#ifdef CONFIG_MM_ID + unsigned int mm_id; +#endif } __randomize_layout; /* @@ -1053,6 +1064,18 @@ struct mm_struct { unsigned long cpu_bitmap[]; }; +#ifdef CONFIG_MM_ID +/* + * For init_mm and friends, we don't allocate an ID and use the dummy value + * instead. Limit ourselves to 1M MMs for now: even though we might support + * up to 4M PIDs, having more than 1M MM instances is highly unlikely. + */ +#define MM_ID_DUMMY 0 +#define MM_ID_NR_BITS 20 +#define MM_ID_MIN (MM_ID_DUMMY + 1) +#define MM_ID_MAX ((1U << MM_ID_NR_BITS) - 1) +#endif /* CONFIG_MM_ID */ + #define MM_MT_FLAGS (MT_FLAGS_ALLOC_RANGE | MT_FLAGS_LOCK_EXTERN | \ MT_FLAGS_USE_RCU) extern struct mm_struct init_mm; diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index 2175ebceb41cb..140de182811f2 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -11,6 +11,7 @@ #include #ifndef __GENERATING_BOUNDS_H #include +#include #include #endif /* !__GENERATING_BOUNDS_H */ @@ -1187,6 +1188,46 @@ static inline int folio_has_private(const struct folio *folio) return !!(folio->flags & PAGE_FLAGS_PRIVATE); } +#ifdef CONFIG_MM_ID +/* + * We store two flags (including the bit spinlock) in the upper bits of + * folio->_mm_ids, whereby that whole value is protected by the bit spinlock. + * This allows for only using an atomic op for acquiring the lock. 
+ */ +#define FOLIO_MM_IDS_EXCLUSIVE_BITNUM 62 +#define FOLIO_MM_IDS_LOCK_BITNUM 63 + +static __always_inline void folio_lock_large_mapcount_data(struct folio *folio) +{ + VM_WARN_ON_ONCE(!folio_test_large(folio) || folio_test_hugetlb(folio)); + bit_spin_lock(FOLIO_MM_IDS_LOCK_BITNUM, &folio->_mm_ids); +} + +static __always_inline void folio_unlock_large_mapcount_data(struct folio *folio) +{ + VM_WARN_ON_ONCE(!folio_test_large(folio) || folio_test_hugetlb(folio)); + __bit_spin_unlock(FOLIO_MM_IDS_LOCK_BITNUM, &folio->_mm_ids); +} + +static inline void folio_set_large_mapped_exclusively(struct folio *folio) +{ + VM_WARN_ON_ONCE(!folio_test_large(folio) || folio_test_hugetlb(folio)); + __set_bit(FOLIO_MM_IDS_EXCLUSIVE_BITNUM, &folio->_mm_ids); +} + +static inline void folio_clear_large_mapped_exclusively(struct folio *folio) +{ + VM_WARN_ON_ONCE(!folio_test_large(folio) || folio_test_hugetlb(folio)); + __clear_bit(FOLIO_MM_IDS_EXCLUSIVE_BITNUM, &folio->_mm_ids); +} + +static inline bool folio_test_large_mapped_exclusively(struct folio *folio) +{ + VM_WARN_ON_ONCE(!folio_test_large(folio) || folio_test_hugetlb(folio)); + return test_bit(FOLIO_MM_IDS_EXCLUSIVE_BITNUM, &folio->_mm_ids); +} +#endif /* CONFIG_MM_ID */ + #undef PF_ANY #undef PF_HEAD #undef PF_NO_TAIL diff --git a/include/linux/rmap.h b/include/linux/rmap.h index e3b82a04b4acb..ff2a16864deed 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -173,6 +173,131 @@ static inline void anon_vma_merge(struct vm_area_struct *vma, struct anon_vma *folio_get_anon_vma(struct folio *folio); +#ifdef CONFIG_MM_ID + +/* + * We don't restrict ID0 to less bit, so we can get a slightly more efficient + * implementation when reading/writing ID0. The high bits are used for flags, + * see FOLIO_MM_IDS_*_BITNUM. + */ +#define FOLIO_MM_IDS_ID0_MASK 0x00000000fffffffful +#define FOLIO_MM_IDS_ID1_SHIFT 32 +#define FOLIO_MM_IDS_ID1_MASK 0x00ffffff00000000ul + +static inline unsigned int folio_mm0_id(struct folio *folio) +{ + return folio->_mm_ids & FOLIO_MM_IDS_ID0_MASK; +} + +static inline void folio_set_mm0_id(struct folio *folio, unsigned int id) +{ + folio->_mm_ids &= ~FOLIO_MM_IDS_ID0_MASK; + folio->_mm_ids |= id; +} + +static inline unsigned int folio_mm1_id(struct folio *folio) +{ + return (folio->_mm_ids & FOLIO_MM_IDS_ID1_MASK) >> FOLIO_MM_IDS_ID1_SHIFT; +} + +static inline void folio_set_mm1_id(struct folio *folio, unsigned int id) +{ + folio->_mm_ids &= ~FOLIO_MM_IDS_ID1_MASK; + folio->_mm_ids |= (unsigned long)id << FOLIO_MM_IDS_ID1_SHIFT; +} + +static __always_inline void folio_set_large_mapcount(struct folio *folio, + int mapcount, struct vm_area_struct *vma) +{ + VM_WARN_ON_ONCE(!folio_test_large(folio) || folio_test_hugetlb(folio)); + + /* Note: mapcounts start at -1. */ + atomic_set(&folio->_large_mapcount, mapcount - 1); + folio->_mm0_mapcount = mapcount - 1; + folio_set_mm0_id(folio, vma->vm_mm->mm_id); + VM_WARN_ON_ONCE(!folio_test_large_mapped_exclusively(folio)); + VM_WARN_ON_ONCE(folio->_mm1_mapcount >= 0); +} + +static __always_inline void folio_add_large_mapcount(struct folio *folio, + int diff, struct vm_area_struct *vma) +{ + const unsigned int mm_id = vma->vm_mm->mm_id; + int mapcount_val; + + VM_WARN_ON_ONCE(!folio_test_large(folio) || folio_test_hugetlb(folio)); + VM_WARN_ON_ONCE(diff <= 0 || mm_id < MM_ID_MIN || mm_id > MM_ID_MAX); + + folio_lock_large_mapcount_data(folio); + /* + * We expect that unmapped folios always have the "mapped exclusively" + * flag set for simplicity. 
+ */ + VM_WARN_ON_ONCE(atomic_read(&folio->_large_mapcount) < 0 && + !folio_test_large_mapped_exclusively(folio)); + + mapcount_val = atomic_read(&folio->_large_mapcount) + diff; + atomic_set(&folio->_large_mapcount, mapcount_val); + + if (folio_mm0_id(folio) == mm_id) { + folio->_mm0_mapcount += diff; + if (folio->_mm0_mapcount != mapcount_val) + folio_clear_large_mapped_exclusively(folio); + } else if (folio_mm1_id(folio) == mm_id) { + folio->_mm1_mapcount += diff; + if (folio->_mm1_mapcount != mapcount_val) + folio_clear_large_mapped_exclusively(folio); + } else if (folio_test_large_mapped_exclusively(folio)) { + /* + * We only allow taking over a tracking slot if the folio is + * exclusive, meaning that any mappings belong to exactly one + * tracked MM (which cannot be this MM). + */ + if (folio->_mm0_mapcount < 0) { + folio_set_mm0_id(folio, mm_id); + folio->_mm0_mapcount = diff - 1; + } else { + VM_WARN_ON_ONCE(folio->_mm1_mapcount >= 0); + folio_set_mm1_id(folio, mm_id); + folio->_mm1_mapcount = diff - 1; + } + folio_clear_large_mapped_exclusively(folio); + } + folio_unlock_large_mapcount_data(folio); +} +#define folio_inc_large_mapcount(folio, vma) \ + folio_add_large_mapcount(folio, 1, vma) + +static __always_inline void folio_sub_large_mapcount(struct folio *folio, + int diff, struct vm_area_struct *vma) +{ + const unsigned int mm_id = vma->vm_mm->mm_id; + int mapcount_val; + + VM_WARN_ON_ONCE(!folio_test_large(folio) || folio_test_hugetlb(folio)); + VM_WARN_ON_ONCE(diff <= 0 || mm_id < MM_ID_MIN || mm_id > MM_ID_MAX); + + folio_lock_large_mapcount_data(folio); + mapcount_val = atomic_read(&folio->_large_mapcount) - diff; + atomic_set(&folio->_large_mapcount, mapcount_val); + + if (folio_mm0_id(folio) == mm_id) + folio->_mm0_mapcount -= diff; + else if (folio_mm1_id(folio) == mm_id) + folio->_mm1_mapcount -= diff; + + /* + * We only consider folios exclusive if there are no mappings or if + * one tracked MM owns all mappings. + */ + if (folio->_mm0_mapcount == mapcount_val || + folio->_mm1_mapcount == mapcount_val) + folio_set_large_mapped_exclusively(folio); + folio_unlock_large_mapcount_data(folio); +} +#define folio_dec_large_mapcount(folio, vma) \ + folio_sub_large_mapcount(folio, 1, vma) +#else /* !CONFIG_MM_ID */ static inline void folio_set_large_mapcount(struct folio *folio, int mapcount, struct vm_area_struct *vma) { @@ -203,6 +328,7 @@ static inline void folio_dec_large_mapcount(struct folio *folio, { atomic_dec(&folio->_large_mapcount); } +#endif /* !CONFIG_MM_ID */ /* RMAP flags, currently only relevant for some anon rmap operations. 
*/ typedef int __bitwise rmap_t; diff --git a/kernel/fork.c b/kernel/fork.c index ebc9132840872..7b9df4c881387 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -813,6 +813,36 @@ static int dup_mmap(struct mm_struct *mm, struct mm_struct *oldmm) #define mm_free_pgd(mm) #endif /* CONFIG_MMU */ +#ifdef CONFIG_MM_ID +static DEFINE_IDA(mm_ida); + +static inline int mm_alloc_id(struct mm_struct *mm) +{ + int ret; + + ret = ida_alloc_range(&mm_ida, MM_ID_MIN, MM_ID_MAX, GFP_KERNEL); + if (ret < 0) + return ret; + mm->mm_id = ret; + return 0; +} + +static inline void mm_free_id(struct mm_struct *mm) +{ + const int id = mm->mm_id; + + mm->mm_id = MM_ID_DUMMY; + if (id == MM_ID_DUMMY) + return; + if (WARN_ON_ONCE(id < MM_ID_MIN || id > MM_ID_MAX)) + return; + ida_free(&mm_ida, id); +} +#else +static inline int mm_alloc_id(struct mm_struct *mm) { return 0; } +static inline void mm_free_id(struct mm_struct *mm) {} +#endif + static void check_mm(struct mm_struct *mm) { int i; @@ -916,6 +946,7 @@ void __mmdrop(struct mm_struct *mm) WARN_ON_ONCE(mm == current->active_mm); mm_free_pgd(mm); + mm_free_id(mm); destroy_context(mm); mmu_notifier_subscriptions_destroy(mm); check_mm(mm); @@ -1293,6 +1324,9 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, if (mm_alloc_pgd(mm)) goto fail_nopgd; + if (mm_alloc_id(mm)) + goto fail_noid; + if (init_new_context(p, mm)) goto fail_nocontext; @@ -1312,6 +1346,8 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p, fail_cid: destroy_context(mm); fail_nocontext: + mm_free_id(mm); +fail_noid: mm_free_pgd(mm); fail_nopgd: free_mm(mm); diff --git a/mm/Kconfig b/mm/Kconfig index b23913d4e47e2..0877be8c50b6c 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -846,6 +846,17 @@ choice enabled at runtime via sysfs. endchoice +config MM_ID + bool "MM ID tracking" + depends on TRANSPARENT_HUGEPAGE && 64BIT + help + Use unique per-MM IDs to track whether large allocations, such + as transparent huge pages, that span multiple physical pages + are "mapped shared" or "mapped exclusively" into user page tables. + This information is useful to determine the current owner of such a + large allocation, for example, helpful for the Copy-On-Write reuse + optimization. + config THP_SWAP def_bool y depends on TRANSPARENT_HUGEPAGE && ARCH_WANTS_THP_SWAP && SWAP && 64BIT diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 6de84377e8e77..7fa84ba506563 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -3193,6 +3193,12 @@ static void __split_huge_page(struct page *page, struct list_head *list, ClearPageHasHWPoisoned(head); +#ifdef CONFIG_MM_ID + if (!new_order) + /* Make sure page->private on the second page is 0. 
 */
+		folio->_mm_ids = 0;
+#endif
+
 	for (i = nr - new_nr; i >= new_nr; i -= new_nr) {
 		__split_huge_page_tail(folio, i, lruvec, list, new_order);
 		/* Some pages can be beyond EOF: drop them from page cache */
diff --git a/mm/internal.h b/mm/internal.h
index f627fd2200464..da38c747c73d4 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -665,6 +665,12 @@ static inline void prep_compound_head(struct page *page, unsigned int order)
 	atomic_set(&folio->_entire_mapcount, -1);
 	atomic_set(&folio->_nr_pages_mapped, 0);
 	atomic_set(&folio->_pincount, 0);
+#ifdef CONFIG_MM_ID
+	folio->_mm0_mapcount = -1;
+	folio->_mm1_mapcount = -1;
+	folio->_mm_ids = 0;
+	folio_set_large_mapped_exclusively(folio);
+#endif
 	if (order > 1)
 		INIT_LIST_HEAD(&folio->_deferred_list);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e276cbaf97054..c81f29e29b82d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -959,6 +959,16 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page)
 			bad_page(page, "nonzero pincount");
 			goto out;
 		}
+#ifdef CONFIG_MM_ID
+		if (unlikely(folio->_mm0_mapcount + 1)) {
+			bad_page(page, "nonzero _mm0_mapcount");
+			goto out;
+		}
+		if (unlikely(folio->_mm1_mapcount + 1)) {
+			bad_page(page, "nonzero _mm1_mapcount");
+			goto out;
+		}
+#endif
 		break;
 	case 2:
 		/* the second tail page: deferred_list overlaps ->mapping */
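
For readers who prefer code over prose, the slot handling described in the
changelog can be modeled in a few lines. This is a simplified, illustrative
user-space model, not kernel code: names are made up, the bit spinlock is
omitted, and counters are 0-based instead of the kernel's -1-based scheme.

#include <stdbool.h>

struct folio_model {
	int mapcount;		/* total mappings ("_large_mapcount") */
	unsigned int mm_id[2];	/* tracked MM IDs, 0 == free slot */
	int mm_mapcount[2];	/* mappings per tracked MM */
	bool exclusive;		/* "certainly mapped exclusively" */
};

/* Map 'diff' pages of the folio into the MM with ID 'mm_id'. */
static void model_map(struct folio_model *f, unsigned int mm_id, int diff)
{
	f->mapcount += diff;

	for (int i = 0; i < 2; i++) {
		if (f->mm_id[i] == mm_id) {
			f->mm_mapcount[i] += diff;
			/* Shared unless this MM owns every mapping. */
			if (f->mm_mapcount[i] != f->mapcount)
				f->exclusive = false;
			return;
		}
	}
	/* A free slot may only be taken over while still exclusive. */
	if (f->exclusive) {
		int i = (f->mm_mapcount[0] == 0) ? 0 : 1;

		f->mm_id[i] = mm_id;
		f->mm_mapcount[i] = diff;
	}
	/* Two MMs (or an untracked one) now map the folio. */
	f->exclusive = false;
}

/* Unmap 'diff' pages of the folio from the MM with ID 'mm_id'. */
static void model_unmap(struct folio_model *f, unsigned int mm_id, int diff)
{
	f->mapcount -= diff;

	for (int i = 0; i < 2; i++) {
		if (f->mm_id[i] == mm_id)
			f->mm_mapcount[i] -= diff;
		/* Exclusive again once one tracked MM owns all mappings. */
		if (f->mm_mapcount[i] == f->mapcount)
			f->exclusive = true;
	}
}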
From patchwork Thu Aug 29 16:56:12 2024
From: David Hildenbrand <david@redhat.com>
Subject: [PATCH v1 09/17] bit_spinlock: __always_inline (un)lock functions
Date: Thu, 29 Aug 2024 18:56:12 +0200
Message-ID: <20240829165627.2256514-10-david@redhat.com>
The compiler might decide that it is a smart idea not to inline
bit_spin_lock(), primarily when a couple of functions in the same file end up
calling it. Especially when used in RMAP context, this can negatively affect
fork() performance, where each additional function call is noticeable.

Let's simply flag all lock/unlock functions as __always_inline;
arch_test_and_set_bit_lock() and friends are already tagged like that (but not
test_and_set_bit_lock(), for some reason).

If this ever becomes a problem, we could split the locking into a fast and a
slow path, and only force inlining of the fast path. But there is nothing
particularly "big" here.

Signed-off-by: David Hildenbrand
---
 include/linux/bit_spinlock.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/bit_spinlock.h b/include/linux/bit_spinlock.h
index bbc4730a6505c..c0989b5b0407f 100644
--- a/include/linux/bit_spinlock.h
+++ b/include/linux/bit_spinlock.h
@@ -13,7 +13,7 @@
 * Don't use this unless you really need to: spin_lock() and spin_unlock()
 * are significantly faster.
 */
-static inline void bit_spin_lock(int bitnum, unsigned long *addr)
+static __always_inline void bit_spin_lock(int bitnum, unsigned long *addr)
 {
 	/*
 	 * Assuming the lock is uncontended, this never enters
@@ -38,7 +38,7 @@ static inline void bit_spin_lock(int bitnum, unsigned long *addr)
 /*
  * Return true if it was acquired
  */
-static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
+static __always_inline int bit_spin_trylock(int bitnum, unsigned long *addr)
 {
 	preempt_disable();
 #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
@@ -54,7 +54,7 @@ static inline int bit_spin_trylock(int bitnum, unsigned long *addr)
 /*
  * bit-based spin_unlock()
  */
-static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
+static __always_inline void bit_spin_unlock(int bitnum, unsigned long *addr)
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
@@ -71,7 +71,7 @@ static inline void bit_spin_unlock(int bitnum, unsigned long *addr)
  * non-atomic version, which can be used eg. if the bit lock itself is
  * protecting the rest of the flags in the word.
  */
-static inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
+static __always_inline void __bit_spin_unlock(int bitnum, unsigned long *addr)
 {
 #ifdef CONFIG_DEBUG_SPINLOCK
 	BUG_ON(!test_bit(bitnum, addr));
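
As a usage note (the example below is illustrative, not from the patch): the
point of bit_spin_lock() in this series is that a single bit of an otherwise
ordinary word can act as the lock protecting the remaining bits of that same
word, which is how the previous patch uses bit 63 of folio->_mm_ids:

#include <linux/bit_spinlock.h>

#define EXAMPLE_LOCK_BITNUM	63	/* illustrative, mirrors FOLIO_MM_IDS_LOCK_BITNUM */

static unsigned long example_word;

/* 'low_bits' must not touch the lock bit. */
static void example_update(unsigned long low_bits)
{
	/* Sets the lock bit atomically and disables preemption. */
	bit_spin_lock(EXAMPLE_LOCK_BITNUM, &example_word);

	/* The other bits are now ours; keep the lock bit set while updating. */
	example_word = (example_word & (1UL << EXAMPLE_LOCK_BITNUM)) | low_bits;

	/*
	 * The non-atomic unlock is fine because the lock bit protects the
	 * rest of the word (see the comment above __bit_spin_unlock()).
	 */
	__bit_spin_unlock(EXAMPLE_LOCK_BITNUM, &example_word);
}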
From patchwork Thu Aug 29 16:56:13 2024
From: David Hildenbrand <david@redhat.com>
Subject: [PATCH v1 10/17] mm: COW reuse support for PTE-mapped THP with CONFIG_MM_ID
Date: Thu, 29 Aug 2024 18:56:13 +0200
Message-ID: <20240829165627.2256514-11-david@redhat.com>
U2FsdGVkX1+WRpUrcC16+aP6zIptf8GLmLLtx/ZeLd+OOEmsGLt5Li1fCU0RDtHn8Id/zGywnA/kPvSIVt6Rf9wmTjyBaKEaZeQmhoy+dQhcGiU0gmloEtObqw+h11jnPYiYexbQ6wyfJBlFFFOxBxVZQZui5sHw2VwVkwtRKJiHSS45LVC9HP7S8FjFwfC9sSH+jm6jbOYBqPTseYhZXFi+K+HXChkLmpfjKAd7g/fDx5kq5/EiXTDzwqQ82K3TkyVReaKkluR0G6wH1qv7tt+S47W7ZQuDak581kkg+lZQJyjemYJQuVLbztxKr6ROOk227CqSv26JaJnYXqmrEyHPUfx410SPbHZfa8Q5LrZgAzh5aYkKXVKZQ/yFMZ4sdcC0Fxvh8SNLEzE+vzCjcF239t90oO0i7wyL4z5Jv8JoEPYeNzgRdlUfuguGWrGrza9LawSDmqTEN+JxotHwv4jkWfpKQC3xzMok/XuoDVTgFqY6ci7LktTphPaWuvf3JzhOXrVBZ/qsrtv+rwZr84xtw6jBi1zN+kJLrMkP827lDi7Y+GS9oaOGVPJw4Kac+z2UEcTAdbowm5OPfTNxnrn2w4QcIRlQKQm8YZ6Deg3EERYDQZJZ93+GXMay2dneZV/mZ9epnONPkCug6rNk+jVzYky6MEcD1Zvrsai6TxL9is/BB/VA70LB3djbp6Fjft8UKUlcVnTk712Dm59/rZJmBcQzoDg4oLPISbI0ZjCIo0AVzK/Pdb+buMCItCkIZNKsJTk+hmJSoH51az5vAmcySRiU/ZFESTCFIf+4q6nr4RQXugR/q/nwSBNiw4Ol5lDg/NAQlc7fykBoi25luD2vrp5vNNPxRTk2HhA+W3p/bu8FGzrmajoektYAKX5fJ6rTUcpDgeyfr/GI+Xjt07ek/U3DXOYe6ie1B2gaYrDyx/uuK/4viVHtp4doYm9tW60WanTJab0IzG27IRz aGO81I5r qkU2+cYKwtg6M/Ci0B5pQjo3JbjxBgb4nfdNIgmwWL1Ui3toIRz2Yb7ZkYhCK6GVeb5kLX8s2uTb+Vs65uHNeyGnjRAVoMgX6qcjLVP0buCxKlCJVW0hXbz3LuI0yXIFOFfK9f28xpLABMwPSsnWbFOSxZJMQAnk1JyPx6nXcXQCwYb0+8IKxxAYctE1ofZn6V8PyuEQSmL4D8/dhCXGDytuYugmpDIKfzQifB3noK1PFMNaJTHo/4UuqSg/n46TucGrktAbI+zm8cbi2Xzn9ZIXcAGBqBZqNL3f7RRrEXExyMxH3drFTkdafbalbw6RJM3Bj9Z6WdKBCC0Gjl66XNfB1RNuEzHOn3BxESbNovCGzI44HVd7fFrSvFw== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Let's add support for CONFIG_MM_ID. The implementation is fairly straight forward: if exclusively mapped, make sure that all references are from mappings. There are plenty of things we can optimize in the future: For example, we could remember that the folio is fully exclusive so we could speedup the next fault further. Also, we could try "faulting around", turning surrounding PTEs that map the same folio writable. But especially the latter might increase COW latency, so it would need further investigation. Signed-off-by: David Hildenbrand --- mm/memory.c | 87 ++++++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 79 insertions(+), 8 deletions(-) diff --git a/mm/memory.c b/mm/memory.c index c2143c40a134b..3803d4aa952ed 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -3564,19 +3564,90 @@ static vm_fault_t wp_page_shared(struct vm_fault *vmf, struct folio *folio) return ret; } -static bool wp_can_reuse_anon_folio(struct folio *folio, - struct vm_area_struct *vma) +#ifdef CONFIG_MM_ID +static bool __wp_can_reuse_large_anon_folio(struct folio *folio, + struct vm_area_struct *vma) { + bool exclusive = false; + + /* Let's just free up a large folio if only a single page is mapped. */ + if (folio_large_mapcount(folio) <= 1) + return false; + /* - * We could currently only reuse a subpage of a large folio if no - * other subpages of the large folios are still mapped. However, - * let's just consistently not reuse subpages even if we could - * reuse in that scenario, and give back a large folio a bit - * sooner. + * The assumption for anonymous folios is that each page can only get + * mapped once into each MM. The only exception are KSM folios, which + * are always small. + * + * Each taken mapcount must be paired with exactly one taken reference, + * whereby the refcount must be incremented before the mapcount when + * mapping a page, and the refcount must be decremented after the + * mapcount when unmapping a page. 
+ * + * If all folio references are from mappings, and all mappings are in + * the page tables of this MM, then this folio is exclusive to this MM. */ - if (folio_test_large(folio)) + if (!folio_test_large_mapped_exclusively(folio)) + return false; + + VM_WARN_ON_ONCE(folio_test_ksm(folio)); + VM_WARN_ON_ONCE(folio_mapcount(folio) > folio_nr_pages(folio)); + VM_WARN_ON_ONCE(folio_entire_mapcount(folio)); + + if (unlikely(folio_test_swapcache(folio))) { + /* + * Note: freeing up the swapcache will fail if some PTEs are + * still swap entries. + */ + if (!folio_trylock(folio)) + return false; + folio_free_swap(folio); + folio_unlock(folio); + } + + if (folio_large_mapcount(folio) != folio_ref_count(folio)) return false; + /* Stabilize the mapcount vs. refcount and recheck. */ + folio_lock_large_mapcount_data(folio); + VM_WARN_ON_ONCE(folio_large_mapcount(folio) < folio_ref_count(folio)); + + if (!folio_test_large_mapped_exclusively(folio)) + goto unlock; + if (folio_large_mapcount(folio) != folio_ref_count(folio)) + goto unlock; + + VM_WARN_ON_ONCE(folio_mm0_id(folio) != vma->vm_mm->mm_id && + folio_mm1_id(folio) != vma->vm_mm->mm_id); + + /* + * Do we need the folio lock? Likely not. If there would have been + * references from page migration/swapout, we would have detected + * an additional folio reference and never ended up here. + */ + exclusive = true; +unlock: + folio_unlock_large_mapcount_data(folio); + return exclusive; +} +#else /* !CONFIG_MM_ID */ +static bool __wp_can_reuse_large_anon_folio(struct folio *folio, + struct vm_area_struct *vma) +{ + /* + * We could reuse the last mapped page of a large folio, but let's + * just free up this large folio. + */ + return false; +} +#endif /* !CONFIG_MM_ID */ + +static bool wp_can_reuse_anon_folio(struct folio *folio, + struct vm_area_struct *vma) +{ + if (folio_test_large(folio)) + return __wp_can_reuse_large_anon_folio(folio, vma); + /* * We have to verify under folio lock: these early checks are * just an optimization to avoid locking the folio and freeing From patchwork Thu Aug 29 16:56:14 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13783476 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 1114BC87FC8 for ; Thu, 29 Aug 2024 16:58:43 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 9AEC66B00A3; Thu, 29 Aug 2024 12:58:42 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 95EA46B00A4; Thu, 29 Aug 2024 12:58:42 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 7D8CD6B00A5; Thu, 29 Aug 2024 12:58:42 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id 5E40F6B00A3 for ; Thu, 29 Aug 2024 12:58:42 -0400 (EDT) Received: from smtpin05.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay08.hostedemail.com (Postfix) with ESMTP id BE26D1405F8 for ; Thu, 29 Aug 2024 16:58:41 +0000 (UTC) X-FDA: 82505892042.05.0B49F26 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf14.hostedemail.com (Postfix) with ESMTP id 150A1100002 for ; Thu, 29 Aug 2024 16:58:39 +0000 (UTC) Authentication-Results: 
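[Editorial aside, not part of the patch: the reuse check above relies on the pairing rule spelled out in the comment -- every mapping takes exactly one folio reference, the reference is taken before the mapcount is raised, and it is dropped only after the mapcount is lowered. A minimal userspace model of that bookkeeping, with made-up names, is sketched below; it is only meant to illustrate why folio_large_mapcount(folio) == folio_ref_count(folio) means "all references come from mappings", and why any extra reference (swapcache, pinning, migration, ...) has to block reuse. It is not kernel code.]

/*
 * Editorial toy model -- NOT the kernel implementation.
 * Every mapping holds exactly one reference; any additional reference is
 * taken without a mapping, so refcount > mapcount whenever one exists.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct toy_folio {
	int refcount; /* all references, including non-mapping ones */
	int mapcount; /* references that come from page table mappings */
};

static void toy_map(struct toy_folio *f)   { f->refcount++; f->mapcount++; }
static void toy_unmap(struct toy_folio *f) { f->mapcount--; f->refcount--; }
static void toy_get(struct toy_folio *f)   { f->refcount++; } /* e.g., a pin */
static void toy_put(struct toy_folio *f)   { f->refcount--; }

/* Reuse is only safe when every reference is explained by a mapping. */
static bool toy_can_reuse(const struct toy_folio *f)
{
	return f->mapcount > 0 && f->mapcount == f->refcount;
}

int main(void)
{
	struct toy_folio f = { 0, 0 };

	toy_map(&f);
	toy_map(&f);	/* two PTEs of the same MM map the folio */
	printf("mapped twice:    reuse=%d\n", toy_can_reuse(&f)); /* 1 */

	toy_get(&f);	/* an extra, non-mapping reference shows up */
	printf("extra reference: reuse=%d\n", toy_can_reuse(&f)); /* 0 */

	toy_put(&f);
	toy_unmap(&f);
	toy_unmap(&f);
	assert(f.refcount == 0 && f.mapcount == 0);
	return 0;
}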
imf14.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=EUncBdWM; spf=pass (imf14.hostedemail.com: domain of david@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1724950630; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=z6SjsdX5+d4w13f2Yod61hus7319jekbv8gEFkX5n/A=; b=QlNshT8IFXKRfqPOBAkdFUiNNWSAT9AdZ/ghfdI5rw3iWmpOUJfWCgVcU30NnTkUuVL3dQ zMoGpeJMA0QuBKPEsAkSxD1V/eosaWHww8h0FrnvUdwMgiCEgnHq1D4sW4jUKE2Mqlb/O5 +EIwgDcQN/bZ+1c/PNrCpLRS7IEnZ0g= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1724950630; a=rsa-sha256; cv=none; b=oNMD4KeWGzKYNQhMgGx9gn0fY6/GJBX0jJi8OeosohlKbU3KO53TcEfiReRhCEVgi9EDYz jTrWHa/+r4qQcS6PrtQfU7ieFhKeKGIX3ENmRH7pKTq6ERpA+UHJU0DYhbeISwvDMedkNI 07aO5hRQZIYrE5GFfa6bQZ2SDLjkEkc= ARC-Authentication-Results: i=1; imf14.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=EUncBdWM; spf=pass (imf14.hostedemail.com: domain of david@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1724950719; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=z6SjsdX5+d4w13f2Yod61hus7319jekbv8gEFkX5n/A=; b=EUncBdWMptcwRqWdMUvgr9KXJhAgnGaXgZVmxoxP56tTRckw70fjpGglNieq5/NlOY3FUP Yw3ZO2rR6eFTi82VnbK2qQsDCtqbK8GjUmJ4iA+olo6QCwkqO39UP7KZ910LCE55bghobB dKp6BmGI+lfw925QFPwcbIqn4VIRfSc= Received: from mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-608-_x1Mv_w9P1eDEEfjyaDX-g-1; Thu, 29 Aug 2024 12:58:34 -0400 X-MC-Unique: _x1Mv_w9P1eDEEfjyaDX-g-1 Received: from mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.12]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-05.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id E0E2419792FA; Thu, 29 Aug 2024 16:58:31 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.39.193.245]) by mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 3B8EB1955F21; Thu, 29 Aug 2024 16:58:23 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand , Andrew Morton , "Matthew Wilcox (Oracle)" , Tejun Heo , Zefan Li , Johannes Weiner , =?utf-8?q?Michal_Koutn=C3=BD?= , Jonathan Corbet , Andy Lutomirski , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen Subject: [PATCH v1 11/17] mm: CONFIG_NO_PAGE_MAPCOUNT to prepare for not maintain per-page mapcounts in large folios Date: Thu, 29 Aug 2024 18:56:14 +0200 Message-ID: 
<20240829165627.2256514-12-david@redhat.com> In-Reply-To: <20240829165627.2256514-1-david@redhat.com> References: <20240829165627.2256514-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.12 X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: 150A1100002 X-Stat-Signature: 83ezdendfp8wm3xqa9e5j6ugepyxpxnx X-HE-Tag: 1724950719-214427 X-HE-Meta: U2FsdGVkX1/BCM+4UUDzK60QPJ8keConog0XmpzC16E7TMva9n2qQSGGDSGeTQatGPy0tetPNvwuInkA9Mt3XsrNQIOrcma1pQN0KdKIN9oN3BBP43BYryQY62N0BGhJkvmadGnTb0FM5QcIvw8uOr4AKzvba9CgVzLQ39py0if/QU8GnCrxkv0/dVzMd616hWjBcBonhYAHCHnCbCDPYwEpund/G5iIfF8Yu/N57a7A2TdymD1gg4F/z7mc5W4Ivm51dfSygthf7mJG+ERUO39iDUI92akkHJO4d83+/bQDyrX3VGOJkesyc0eTwqZy7ljZ6exJel2ADS+W02z07E6wrG8b9S0LNE+ACXOKmDg6qCf6xAzWaFCnw/rPXWxYDzOfrY/HU2TJnfKNESU++QYB/6eIz0d4y7lSIV3Nsh16xzVHrvL/6S61UP3U3VLgu80H2ciiQe8A7Lxh2sCp4Mf2wXxd+2Yo6kpXEXbaG8ULAcI4+rnUxaaZIuMotEmkJeUNHALtsTG0/JzytgM60+JxUC8tuMIia8LXLem3NkEQ4sb3kSt6h0+99rhEeCnp8bDImGxHNEHTk+UpbQAaecRv8vl11h0ry9VjZXfvdTkeK50znFxV1mPC9WGh+3NB9ThO3e8AVEPMHlWETShydlxCJA6CjfSsC4JuhPmwYJuvVqlPsIalNCaL1XnvCWi4FSagXd7ELcxKWm+ku39MhXnYaEH6dFMT0+Lv9InsG1HM07XNSh+5hIWGpNutxMXXsTrfawchOyiQhUkZGXeSsDUhawfxEz1JaQqKNicYbN/FWjbTfBWHtgU9K95x/+RECNm8a0X2Uj+sZMkmpsbYJ8E/JLZUFNTTwKAuyMCd2dVS86IfryUHYXZLtE6MOsYoMTap7soZCjA9oTy9DwcJBm9VLL/5xb27UmB+LkD5lCkQM+vUMYYKRnI0PrOFotXTsNEWLMLUxhNzT2qwbEU gms6tBiJ ZpY4gxAPeoVMB5V+9b3rLMscQwRFbgKodstGgoRZeclv8n5+BbntU8BiwKQCHchAJFSM9gCrEFlU1A7P7GsPtdscErRGzJOY7zst2zKXrNbj5E9hPhM1WCDAhSevqv7Fp9T+PpJNcrxr+z3rHxm1Rz1WbwDE6rSshyRxi1ciyMdBB3OHtWOL2huyf042W5U3J74tJ4ufGPjDviJ8PRusrFiepDtMUPH91Urkt9b9OzGESFuJSyfie8Km7BTdphC8LYwkrh7z5/aFEas54n6yBBP2q4DPwHGKZ+ZqUWF6YLK/xIZmQjgAvyZ0qELCSSDmuyvjlpm4H8fklZvTLHDycZu0qXQ== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: We're close to the finishing line: let's introduce a new CONFIG_NO_PAGE_MAPCOUNT config option where we will incrementally remove any dependencies on per-page mapcounts in large folios. Once that's done, we'll stop maintaining the per-page mapcounts with this config option enabled. CONFIG_NO_PAGE_MAPCOUNT will be EXPERIMENTAL for now, as we'll have to learn about some of the real world impact of some of the implications. As writing "!CONFIG_NO_PAGE_MAPCOUNT" is really nasty, let's introduce a helper config option "CONFIG_PAGE_MAPCOUNT" that expresses the negation. Signed-off-by: David Hildenbrand --- mm/Kconfig | 20 ++++++++++++++++++++ 1 file changed, 20 insertions(+) diff --git a/mm/Kconfig b/mm/Kconfig index 0877be8c50b6c..73cfacbd1cc6a 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -878,8 +878,28 @@ config READ_ONLY_THP_FOR_FS support of file THPs will be developed in the next few release cycles. +config NO_PAGE_MAPCOUNT + bool "No per-page mapcount (EXPERIMENTAL)" + depends on TRANSPARENT_HUGEPAGE && MM_ID + help + Do not maintain per-page mapcounts for pages part of larger + allocations, such as transparent huge pages. + + When this config option is enabled, some interfaces that relied on + this information will rely on less-precise per-folio information + instead: for example, using the average per-page mapcount in such + a large allocation instead of the per-page mapcount. + + EXPERIMENTAL because the severity of some of the implications first + have to be understood properly. 
+ endif # TRANSPARENT_HUGEPAGE +# simple helper to make the code a bit easier to read +config PAGE_MAPCOUNT + def_bool y + depends on !NO_PAGE_MAPCOUNT + # # The architecture supports pgtable leaves that is larger than PAGE_SIZE # From patchwork Thu Aug 29 16:56:15 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13783477 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 76F6FC87FC8 for ; Thu, 29 Aug 2024 16:58:52 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 0874B6B00A5; Thu, 29 Aug 2024 12:58:52 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 039026B00A6; Thu, 29 Aug 2024 12:58:51 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id DF3246B00A7; Thu, 29 Aug 2024 12:58:51 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id BDAFE6B00A5 for ; Thu, 29 Aug 2024 12:58:51 -0400 (EDT) Received: from smtpin10.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 796E4402A2 for ; Thu, 29 Aug 2024 16:58:51 +0000 (UTC) X-FDA: 82505892462.10.C40AB07 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf01.hostedemail.com (Postfix) with ESMTP id C6D4F40008 for ; Thu, 29 Aug 2024 16:58:49 +0000 (UTC) Authentication-Results: imf01.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b="K0rzA/mM"; spf=pass (imf01.hostedemail.com: domain of david@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1724950685; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=Xlz/Hr/MLXsaGBmeoyntSanirDE8WH40+jKnaAIxWt4=; b=571agoWGTK0B8VZYugeG5crrCgHuiQlbhtAE1BB+ReQdvTVzyC7YZjMov1v3JqtqGnwUO4 dZQEcTw5tDPKSH6xx/OMuTQ01xU8MkRwofyz9EQz6ALKXewAdZ0AR6sW+nzzf/RunVpySk OxkkKMUrSViI6necZc0DWt4dg+DudKk= ARC-Authentication-Results: i=1; imf01.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b="K0rzA/mM"; spf=pass (imf01.hostedemail.com: domain of david@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1724950685; a=rsa-sha256; cv=none; b=1E1puELzW5gFhDy30ZcbpDwqBgLhf6wBtdiPc8gLyrbPJuVnR2S0YaKtq48oauh7VvzwLX F1Mzc5iDmKyCldBvT0u5+WcZXiMCNRJXozLpzuB2NOG9G1JrXmahuzJENIcvsCoG+e5HxW BgALeUhB7ZSpafmTuRxKankWJEnc+Xc= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1724950729; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=Xlz/Hr/MLXsaGBmeoyntSanirDE8WH40+jKnaAIxWt4=; 
b=K0rzA/mMYkREbPB/ZxPvtNTSq/BTkJ78ln5P4CVal1SJ/OnWNN70qityKS8WvxRK2yEyK4 BtJsJclEX4EfppDmcqq4fh3LTAo/zMFyvCTk29hBNa8JxQf0yi/ciefF3dJUks0Jx17dZ7 TCo47BBj2Q5jp6nJ7g7UuDGU9eSr3Vg= Received: from mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-302-FFTH9f1IPp2QjlcNQc8UoQ-1; Thu, 29 Aug 2024 12:58:45 -0400 X-MC-Unique: FFTH9f1IPp2QjlcNQc8UoQ-1 Received: from mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.12]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id A6ADE190308D; Thu, 29 Aug 2024 16:58:43 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.39.193.245]) by mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 3F3A51955F21; Thu, 29 Aug 2024 16:58:31 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand , Andrew Morton , "Matthew Wilcox (Oracle)" , Tejun Heo , Zefan Li , Johannes Weiner , =?utf-8?q?Michal_Koutn=C3=BD?= , Jonathan Corbet , Andy Lutomirski , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen Subject: [PATCH v1 12/17] mm: remove per-page mapcount dependency in folio_likely_mapped_shared() (CONFIG_NO_PAGE_MAPCOUNT) Date: Thu, 29 Aug 2024 18:56:15 +0200 Message-ID: <20240829165627.2256514-13-david@redhat.com> In-Reply-To: <20240829165627.2256514-1-david@redhat.com> References: <20240829165627.2256514-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.12 X-Rspamd-Server: rspam03 X-Rspam-User: X-Rspamd-Queue-Id: C6D4F40008 X-Stat-Signature: c61czwrtkfa9igbeau1ba5uspzzdxesq X-HE-Tag: 1724950729-191532 X-HE-Meta: U2FsdGVkX18f/hdTYdO/yaaWpl+r0EQ8PytUmBnmSktBtsyTA1Cz7nfvZiFGAwDZ4B+fwKdPXF0doK7U06j3nWrh6mLRpaTVqcO1L6WtXE8FJh9J8hP88gcojEio5nnFqQ2O1BZznOMW3phOgqjQYaniYlDShK5zo0Vs2P3vNQJLNfHMymtjvrgkL5W0YCx9gYQEa0TXSbg1QF7ET0MevlhpwDN/CzmWdQyfl94i4nqwX9yYnbAxGj+v8jR4H1UoLC6rIpNgyDUgSENsZANecpgVtUupKZoR93bOCgOTXC6uuJ5VRfR6VvL53F+Puen/hicc19mbOIrVivkY7+TkkkqyiE2KB1E0qBTL8xXedHkG46D2GdGKcg/nOyvbEJtnJUIM1bRmFHhl4wKky+buPfg0WYvknkoWGDZyOFwv8q+U+egewThSVvwyaRHSKZhAaUdNVSZxugK54BwBLUNB0JM0/aXht3e54YP+a6n4ePA82rowLHzb2Bme52kEpwVAoOTsxTVgHLehMpePQ4Rm0bgFM2hLOwDWKfifoc4+tMP7nGoOai2YkVpqC6hXk78TkkCtwwBq9A2unthD68nXyKUnDd1nAOPnPSjqd/8o4zeFzv/1agDGvw2W57HpnD1FiE9MQ5XQOVT+9tc9ZFXfLN7RhzBBDiw/iS3M4tt14QZqrcA9mIMRrQ6kMZnVI669z/b1jvLQuGhfKCO+Gyk8Xuv3O1Qf8d4nlUNiQAhvDptI5D7NDtLSLIDj3uMUG2L5J9uthP9RqtlDBVsSSQQIfHveZQ046gFJNH+LwQ3EdwW/xIW5Uof+LZOcEx4sjb2b0Ta08+hTrsQAZ75unZVCHHrcYXiYAN67MImYjtNErT51RSAIY0WwnV1+/Si9D19EOqXdjGhuQ4t7VliHHYUf3tUIRYGWcdW6xvV+4Df2TTNchrliiYraSb7tE1mwsfgErw8su7K/qG8tzm/TIM1 xU3Aqj9v 3MYPsiNNODviyowUQ/u+8wTvg8pJf6aMXSgbdkDb/x5rCRLzYU0P4BKs9RxehunSKmWNQeCJPIQQxeMON4PznOBhUUJHvhuvRMCAFbVfo29dPftMQgqi7GBZ5h8HYtFmm37V7d8nfQ7gQ02XM3h50qIi5J9UwVCUCGloJ7iyhateMj3xQ9WtEK9xgnWXOwdypP3JFnNz6O9SwpGEV5Q02mY7hU81AkhJTaejNGzW5tgvf4yDNzI23RN/Fnpn1/PSV18Pu908CPiGrHdUyUiiANOpHGKTpY+pbvR4+T7l+SdOeHCxwQlKcbo/zRpudF++PspMjuw1hxnJ1smDdrVIihGzxi2EQmi+BJj8jRB836Bo1Z6XKZLUviOXtWw== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: 
owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Let's remove the dependency on the mapcount of the first folio page in large folios and consequently any "false negatives" from folio_likely_mapped_shared(). In theory, we could implement this change only with CONFIG_MM_ID, without gluing it to another config option. But we'll be a bit careful for the time being, because folio_likely_mapped_shared() can now return "false positives" more frequently. Glue it to CONFIG_NO_PAGE_MAPCOUNT, which expresses the "EXPERIMENTAL" character for now. Let's reuse our new MM ownership tracking infrastructure for large folios. Thoroughly document the changed semantics. We might now detect that a folio as "mapped shared" although it no longer is -- this can only happen if more than two MMs mapped a folio at the same time, and neither of the first two is the last one mapping the folio. "false positives" in this context are certainly better than "false negatives" when it comes to enforcing policies (e.g., is process 1 allowed to migrate a folio that might also be used by another process?), but in an ideal world we wouldn't have these "false positives" either. It's worth noting that there will not be a change for small folios and hugetlb folios. In general, for PMD-mapped THP we don't expect a change, only for PTE-mapped THP. This will affect various users of folio_likely_mapped_shared(): (1) khugepaged counts PTEs that target shared folios towards the max_ptes_shared. With false positives we might collapse too little, with false negatives too much. (2) NUMA hinting: PROT_NONE NUMA protection will be skipped for shared folios in COW mappings. With false positives we skip too many, with false negatives we don't skip some we should be skipping. During NUMA hinting faults, we will set TNF_SHARED with shared folios in shared mappings. With false positives we set it too often, with false negatives not often enough. During NUMA hinting faults, we will reject to migrate shared folios in mappings with execute permissions (expectation: shared libraries). With false positives we reject to migrate some, with false negatives we migrate too many. (3) MADV_COLD / MADV_PAGEOUT / MADV_FREE will not try splitting PTE-mapped THPs that are considered shared but not fully covered by the requested range, consequently not processing them. With false positives we will not split+process some we could have processed, with false negatives we split some folios we probably shouldn't have split. (4) mbind() / migrate_pages() / move_pages() will refuse to migrate shared folios unless MPOL_MF_MOVE_ALL is effective (requires CAP_SYS_NICE). With false positives we reject to migrate some folios that could be migrated, with false negatives we migrate some folios that shouldn't have been migrated. (5) folio_referenced_one() will skip exclusive swapbacked folios in dying processes. Shared folios will not be skipped. With false positives we might skip this optimization, with false negatives we might apply this optimization wrongly. Likely (3) and (4) are not really used a lot on folios that are heavily shared among processes -- rather on anonymous memory (mostly from a single parent process) or almost-exclusively mmap'ed files. Similarly (1) is not expected to matter much in practice, and if so, only for long-running child processes after fork(). But even here, it's unlikely that it matters in practice. (5) is not expected to matter much at all, it's a new optimization either way. 
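[Editorial aside before item (2) is picked up below, not part of the patch: a toy model may help illustrate why the new indication can become sticky. The series tracks at most two mapper MMs per large folio (see folio_mm0_id/folio_mm1_id in the earlier patch); once a third MM maps the folio, exclusivity can no longer be proven unless one of the two tracked MMs again accounts for every mapping. The sketch below is plain userspace C with invented names and data structures; it is not how the kernel implements the tracking.]

/*
 * Editorial toy model -- NOT the kernel implementation. It only mimics the
 * behaviour described above: two mapper MMs are tracked precisely, any
 * further MM is accounted anonymously, which forces a conservative
 * "maybe shared" answer.
 */
#include <stdbool.h>
#include <stdio.h>

#define NO_MM (-1)

struct toy_folio {
	int mm_id[2];    /* the (at most) two precisely tracked mapper MMs */
	int mappings[2]; /* mappings contributed by each tracked MM */
	int untracked;   /* mappings contributed by any other MM */
};

static void toy_map(struct toy_folio *f, int mm)
{
	int free_slot = -1;

	for (int i = 0; i < 2; i++) {
		if (f->mm_id[i] == mm) {
			f->mappings[i]++;
			return;
		}
		if (f->mm_id[i] == NO_MM)
			free_slot = i;
	}
	if (free_slot >= 0) {
		f->mm_id[free_slot] = mm;
		f->mappings[free_slot] = 1;
	} else {
		f->untracked++;	/* a third MM: precise tracking is lost */
	}
}

static void toy_unmap(struct toy_folio *f, int mm)
{
	for (int i = 0; i < 2; i++) {
		if (f->mm_id[i] == mm) {
			if (--f->mappings[i] == 0)
				f->mm_id[i] = NO_MM;
			return;
		}
	}
	f->untracked--;
}

/* "Exclusive" can only be claimed when one tracked MM explains all mappings. */
static bool toy_maybe_shared(const struct toy_folio *f)
{
	int total = f->mappings[0] + f->mappings[1] + f->untracked;

	if (f->untracked)
		return true;
	return f->mappings[0] != total && f->mappings[1] != total;
}

int main(void)
{
	struct toy_folio f = { { NO_MM, NO_MM }, { 0, 0 }, 0 };

	toy_map(&f, 1);
	toy_map(&f, 2);
	toy_map(&f, 3);		/* a third MM shows up */
	toy_unmap(&f, 1);
	toy_unmap(&f, 2);
	/* Only MM 3 still maps the folio, yet that can no longer be proven: */
	printf("maybe shared: %d\n", toy_maybe_shared(&f));	/* prints 1 */
	return 0;
}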
(2) is interesting: the expectation here is that for anon folios it might not make a big difference. For file-backed pages it might, we'll have to learn about that. Long story short: this paves the way for a complete CONFIG_NO_PAGE_MAPCOUNT implementation, but maybe we'll have to switch to another MM ownership tracking later. Signed-off-by: David Hildenbrand --- include/linux/mm.h | 24 ++++++++++++++++++------ 1 file changed, 18 insertions(+), 6 deletions(-) diff --git a/include/linux/mm.h b/include/linux/mm.h index 98411e53da916..b37f20b26776d 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -2142,9 +2142,9 @@ static inline size_t folio_size(const struct folio *folio) * are independent. * * As precise information is not easily available for all folios, this function - * estimates the number of MMs ("sharers") that are currently mapping a folio - * using the number of times the first page of the folio is currently mapped - * into page tables. + * must sometimes estimate the number of MMs ("sharers") that are currently + * mapping a folio using the number of times the first page of the folio is + * currently mapped into page tables. * * For small anonymous folios and anonymous hugetlb folios, the return * value will be exactly correct: non-KSM folios can only be mapped at most once @@ -2152,13 +2152,21 @@ static inline size_t folio_size(const struct folio *folio) * considered shared even if mapped multiple times into the same MM. * * For other folios, the result can be fuzzy: - * #. For partially-mappable large folios (THP), the return value can wrongly - * indicate "mapped exclusively" (false negative) when the folio is - * only partially mapped into at least one MM. + * #. With CONFIG_PAGE_MAPCOUNT: For partially-mappable large folios (THP), + * the return value can wrongly indicate "mapped exclusively" (false + * negative) when the folio is only partially mapped into at least one MM. + * #. With CONFIG_NO_PAGE_MAPCOUNT: For partially-mappable large folios + * (THP), the return value can wrongly indicate "mapped shared" (false + * positive) in some scenarios. This can only happen if two MMs are + * already mapping a folio and a more MM starts mapping the folio. We + * would still the detect the folio as "mapped shared" after the first + * two MMs no longer map the folio. * #. For pagecache folios (including hugetlb), the return value can wrongly * indicate "mapped shared" (false positive) when two VMAs in the same MM * cover the same file range. * + * With CONFIG_MM_ID, this function will never return "false negatives". + * * Further, this function only considers current page table mappings that * are tracked using the folio mapcount(s). * @@ -2183,12 +2191,16 @@ static inline bool folio_likely_mapped_shared(struct folio *folio) if (mapcount <= 1) return false; +#ifdef CONFIG_PAGE_MAPCOUNT /* If any page is mapped more than once we treat it "mapped shared". */ if (folio_entire_mapcount(folio) || mapcount > folio_large_nr_pages(folio)) return true; /* Let's guess based on the first subpage. 
*/ return atomic_read(&folio->_mapcount) > 0; +#else /* !CONFIG_PAGE_MAPCOUNT */ + return !folio_test_large_mapped_exclusively(folio); +#endif /* !CONFIG_PAGE_MAPCOUNT */ } #ifndef HAVE_ARCH_MAKE_FOLIO_ACCESSIBLE From patchwork Thu Aug 29 16:56:16 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13783478 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 89F93C87FC9 for ; Thu, 29 Aug 2024 16:59:03 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 236146B00A7; Thu, 29 Aug 2024 12:59:03 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 1E52F6B00A8; Thu, 29 Aug 2024 12:59:03 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 0ACB36B00A9; Thu, 29 Aug 2024 12:59:03 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0012.hostedemail.com [216.40.44.12]) by kanga.kvack.org (Postfix) with ESMTP id E0C306B00A7 for ; Thu, 29 Aug 2024 12:59:02 -0400 (EDT) Received: from smtpin21.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay09.hostedemail.com (Postfix) with ESMTP id 8834B806B9 for ; Thu, 29 Aug 2024 16:59:02 +0000 (UTC) X-FDA: 82505892924.21.1730157 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf26.hostedemail.com (Postfix) with ESMTP id B5A41140011 for ; Thu, 29 Aug 2024 16:59:00 +0000 (UTC) Authentication-Results: imf26.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=MIDHBggJ; spf=pass (imf26.hostedemail.com: domain of david@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1724950651; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=/4w50puZlu0HXrTGynq7fLz3hKXbxnRdIDDiKfwRwdI=; b=de8hpLFKIMq/38br9SwuIvfE5NT2JPgg9y+czal8fTAIwialqU2nZ0FKKmfmSsZKpeEQrT hkbo9gqxQtLbBVca8NK3dzDnXoo34UrmTNOsKhnFYG5rYvifvmmvvZDiGhDf2t7y4bXXFq ckueLo1FqibO1pLpEEsm95jB5KZ3+f4= ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1724950651; a=rsa-sha256; cv=none; b=u6ErcYSH7h+SSWDWEq8iqjIUakD/+asvg5LuEkdXp89V1WO+acvfHpSiexVyq085eoL4F3 lrcROBUpoBS8ua7ElJDYkgsFRYDKxpr6yPTjrwOlswSsz7hCsNmdoJpjC6PV2rmHTrj3Lo N09tRD5gD30S0/Rn76BC2P6TEI05thQ= ARC-Authentication-Results: i=1; imf26.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=MIDHBggJ; spf=pass (imf26.hostedemail.com: domain of david@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1724950740; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=/4w50puZlu0HXrTGynq7fLz3hKXbxnRdIDDiKfwRwdI=; 
b=MIDHBggJjuRbN5jmkFDAYktU12AH0nDt/FqD1zLCI/Hl0v1nRWzFA7pUwrFtmjZeyTkNBZ S2L1hkq2E8/cVbAn25UGDffFn1+Wpu6mLnfPoVG14dlEnsruDEpQWhyYiQZTGvzYXMep1N xmRJrANuwDlVwQqw+ge6iFjpCobft3I= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-251-I4RGjMuXNxC500-cFdsqyQ-1; Thu, 29 Aug 2024 12:58:55 -0400 X-MC-Unique: I4RGjMuXNxC500-cFdsqyQ-1 Received: from mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.12]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 4912C18F498B; Thu, 29 Aug 2024 16:58:51 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.39.193.245]) by mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 1C0F81955F21; Thu, 29 Aug 2024 16:58:44 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand , Andrew Morton , "Matthew Wilcox (Oracle)" , Tejun Heo , Zefan Li , Johannes Weiner , =?utf-8?q?Michal_Koutn=C3=BD?= , Jonathan Corbet , Andy Lutomirski , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen Subject: [PATCH v1 13/17] fs/proc/page: remove per-page mapcount dependency for /proc/kpagecount (CONFIG_NO_PAGE_MAPCOUNT) Date: Thu, 29 Aug 2024 18:56:16 +0200 Message-ID: <20240829165627.2256514-14-david@redhat.com> In-Reply-To: <20240829165627.2256514-1-david@redhat.com> References: <20240829165627.2256514-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.12 X-Rspam-User: X-Rspamd-Server: rspam04 X-Rspamd-Queue-Id: B5A41140011 X-Stat-Signature: xfp3hrabg1mkj4kbownsqribxes891r3 X-HE-Tag: 1724950740-809104 X-HE-Meta: U2FsdGVkX1+hsYBIzX7XG2J4g25e7HYYQ6WtGkEgLiHa2J9qhERjyj5mSu7bdYayrldquvd9duOYiGJDgKRPV0vn1KasFROGqMGo5BVcQkd7zgLMIibmUz5VPxdXJb2H/3O+rEUmwpdP1FZwins9DRJGQfUBvP7lwra1t4/Pn4W07mjgznjX+vqN6/M2V0/z5oIdZ3xUwf2YFxX/S4dmBESsMR1c24JMXtke6Jy9i2wJpNmJp8eybmKuMcPPR0YASlNPJwQdbSGMqSiVZAeAW16PzuNLPPEeP+9ZlVZNYY8T5RMngFo5ISiOFi4zHXsd55/UqRuGjaakcosfCK1q7XJtgW/A5iF6nLXgmOaADcofyrdIhcN2yXFB18aW3Yu8wAyUTgHLw0Mril5C3Dteg0r15kEKot+6xIK4Vx/jsG65YaGhDfHx5+H1wFeS4EpWV40ZTSK0ttGNFs646VtowCMkyD0+jhKnAzcBAL9W3+DQpIiHl/xxcWul52+nzBmKFA8GYpK6MSQPY7CiFhVXdLwsGHLx3mj9IjhyBsuLvxR+GlTd727pSAeKh5/ViOVOZgM6rs/avfa3Kc9HrzjC7KQ2sAp4pk+yWxPbgZJKlY1E45KSlS9giFGbcIinmbZhZfFzWn5rzp9jQp+MqCe9VwxhDgMKWFyk4gFIeOTLfvYgG+veJX8YinDWNGONM33Ox/z9I1T2QHy9ZaIbPOO4NSn90nVv+YntmopA/RlaWCgYgQxiZJ+synUevgPXCIP+4Re1p3GG2HDa5g8SWcCxEsX0XCFVzribLcoCdKIDuJnarfLZYDuHBkvHzto8F+EDeXJNB4G0S9rAW9wIgdAsB1PBRMU7SxxZsxMEVR0YgnzadHTQxXwV6el9cKwA29YWx3DM9jDr85umeyehayVuETcSqsh5iXxyV7iMmjZiVkzhotg60yktVfkONZsaeFqczQfDPJtyIE03wwiJuJO 8yqw1OTR rWof7zXrd+1UZ0OHmCRqs7kNmhx3VZAJsVjgx61oPkQld33JcVIwSYRoBg9YvYlEXa55k5+u+8qHX2qiT9khmsh2liRYD1eNUV9ybP3UbLf3lznxMG6KTs1PrAEte17Pq7WtbyJZ9MOyg/4j6XJDQFdSNfsZAlZCdkL4sJhSjx2763BjaM+i1V03uNsWY0stLKTVtQEEIXGP/FEoPkF6JTIB/acHWkVi+sXwQ0b5MdxKMdzdM6f+Wq1DuCKS0Ow6mqVfOQFWKtmI0c6wNnr8jGx0f+2KktJRMJprjTKUDFqaRyGtSbw08+lmwzufLEL6nJEpkJZTXhFKAuz1GYwaIVt3sEKGMVnGgq/R/Vk6H7klTSPUPkXHY7MxoVQ5F7D2Zg/Uf X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 
Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Let's implement an alternative when per-page mapcounts in large folios are no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT. For large folios, we'll return the per-page average mapcount within the folio, except when the average is 0 but the folio is mapped: then we return 1. For hugetlb folios and for large folios that are fully mapped into all address spaces, there is no change. As an alternative, we could simply return 0 for non-hugetlb large folios, or disable this legacy interface with CONFIG_NO_PAGE_MAPCOUNT. But the information exposed by this interface can still be valuable, and frequently we deal with fully-mapped large folios where the average corresponds to the actual page mapcount. So we'll leave it like this for now and document the new behavior. Signed-off-by: David Hildenbrand --- Documentation/admin-guide/mm/pagemap.rst | 7 +++++- fs/proc/internal.h | 31 ++++++++++++++++++++++++ fs/proc/page.c | 18 +++++++++++--- 3 files changed, 52 insertions(+), 4 deletions(-) diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst index caba0f52dd36c..49590306c61a0 100644 --- a/Documentation/admin-guide/mm/pagemap.rst +++ b/Documentation/admin-guide/mm/pagemap.rst @@ -42,7 +42,12 @@ There are four components to pagemap: skip over unmapped regions. * ``/proc/kpagecount``. This file contains a 64-bit count of the number of - times each page is mapped, indexed by PFN. + times each page is mapped, indexed by PFN. Some kernel configurations do + not track the precise number of times a page part of a larger allocation + (e.g., THP) is mapped. In these configurations, the average number of + mappings per page in this larger allocation is returned instead. However, + if any page of the large allocation is mapped, the returned value will + be at least 1. The page-types tool in the tools/mm directory can be used to query the number of times a page is mapped. diff --git a/fs/proc/internal.h b/fs/proc/internal.h index cc520168f8b69..3c687f97e18c4 100644 --- a/fs/proc/internal.h +++ b/fs/proc/internal.h @@ -174,6 +174,37 @@ static inline int folio_precise_page_mapcount(struct folio *folio, return mapcount; } +/** + * folio_average_page_mapcount() - Average number of mappings per page in this + * folio + * @folio: The folio. + * + * The average number of present user page table entries that reference each + * page in this folio as tracked via the RMAP: either referenced directly + * (PTE) or as part of a larger area that covers this page (e.g., PMD). + * + * Returns: The average number of mappings per page in this folio. 0 for + * folios that are not mapped to user space or are not tracked via the RMAP + * (e.g., shared zeropage). 
+ */ +static inline int folio_average_page_mapcount(struct folio *folio) +{ + int mapcount, entire_mapcount; + unsigned int adjust; + + if (!folio_test_large(folio)) + return atomic_read(&folio->_mapcount) + 1; + + mapcount = folio_large_mapcount(folio); + entire_mapcount = folio_entire_mapcount(folio); + if (mapcount <= entire_mapcount) + return entire_mapcount; + mapcount -= entire_mapcount; + + adjust = folio_large_nr_pages(folio) / 2; + return ((mapcount + adjust) >> folio_large_order(folio)) + + entire_mapcount; +} /* * array.c */ diff --git a/fs/proc/page.c b/fs/proc/page.c index a55f5acefa974..c7838de949287 100644 --- a/fs/proc/page.c +++ b/fs/proc/page.c @@ -67,9 +67,21 @@ static ssize_t kpagecount_read(struct file *file, char __user *buf, * memmaps that were actually initialized. */ page = pfn_to_online_page(pfn); - if (page) - mapcount = folio_precise_page_mapcount(page_folio(page), - page); + if (page) { + struct folio *folio = page_folio(page); + +#ifdef CONFIG_PAGE_MAPCOUNT + mapcount = folio_precise_page_mapcount(folio, page); +#else /* !CONFIG_PAGE_MAPCOUNT */ + /* + * Indicate the per-page average, but at least "1" for + * mapped folios. + */ + mapcount = folio_average_page_mapcount(folio); + if (!mapcount && folio_test_large(folio) && folio_mapped(folio)) + mapcount = 1; +#endif /* !CONFIG_PAGE_MAPCOUNT */ + } if (put_user(mapcount, out)) { ret = -EFAULT; From patchwork Thu Aug 29 16:56:17 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13783479 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id F07BBC87FC3 for ; Thu, 29 Aug 2024 16:59:08 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 7DBA66B00A8; Thu, 29 Aug 2024 12:59:08 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id 78A156B00AA; Thu, 29 Aug 2024 12:59:08 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 62B756B00AB; Thu, 29 Aug 2024 12:59:08 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id 41E7B6B00A8 for ; Thu, 29 Aug 2024 12:59:08 -0400 (EDT) Received: from smtpin06.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay07.hostedemail.com (Postfix) with ESMTP id 02973160E0B for ; Thu, 29 Aug 2024 16:59:07 +0000 (UTC) X-FDA: 82505893176.06.6554054 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf12.hostedemail.com (Postfix) with ESMTP id 5508D40002 for ; Thu, 29 Aug 2024 16:59:06 +0000 (UTC) Authentication-Results: imf12.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=KkDCLzPC; spf=pass (imf12.hostedemail.com: domain of david@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1724950627; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; 
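[Editorial aside, not part of the patch: to make the rounding above concrete with made-up numbers, an order-4 folio (16 pages) with a large mapcount of 24 and no entire mappings would report an average of (24 + 16/2) >> 4 = 2 per page. For completeness, below is a small sketch of how user space consumes /proc/kpagecount; it is ordinary pagemap plumbing, not code from this series, and it needs CAP_SYS_ADMIN because pagemap otherwise hides the PFN.]

/*
 * Editorial sketch, not from the series: look up the kpagecount value for one
 * of our own pages. Run as root; without CAP_SYS_ADMIN the PFN reads as zero.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static uint64_t read_u64_at(const char *path, off_t off)
{
	uint64_t val = 0;
	int fd = open(path, O_RDONLY);

	if (fd < 0 || pread(fd, &val, sizeof(val), off) != sizeof(val)) {
		perror(path);
		exit(1);
	}
	close(fd);
	return val;
}

int main(void)
{
	long psize = sysconf(_SC_PAGESIZE);
	volatile char *p = malloc(psize);
	uint64_t pme, pfn;

	p[0] = 1;	/* make sure the page is actually faulted in */

	/* One 64-bit pagemap entry per virtual page of this process. */
	pme = read_u64_at("/proc/self/pagemap",
			  ((uintptr_t)p / psize) * sizeof(uint64_t));
	if (!(pme & (1ULL << 63))) {	/* bit 63: page present */
		fprintf(stderr, "page not present?\n");
		return 1;
	}
	pfn = pme & ((1ULL << 55) - 1);	/* bits 0-54: page frame number */
	if (!pfn) {
		fprintf(stderr, "PFN hidden; run with CAP_SYS_ADMIN\n");
		return 1;
	}
	/* kpagecount: one 64-bit mapcount per PFN. */
	printf("PFN 0x%llx is mapped %llu time(s)\n", (unsigned long long)pfn,
	       (unsigned long long)read_u64_at("/proc/kpagecount",
					       pfn * sizeof(uint64_t)));
	return 0;
}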
bh=cKf6dQguD8Od23K0wDaepWW4S0v2e9lLNnYaNj5b1cU=; b=He0sIvalODBBQ5s5m1hqVyu85Z8hiHEMbKBYgMhKnDRGzMEklYdBZqGUHNfpes1ym9rm96 3S32gOdUiP1Nb8qq1H7WYd+YxaW4tLSMvbVRgNqe/rm8FO8jp9BanlvuxVPAZOyIScvXqq xl6nkUV/+w1ZtY4Jjy2ANg/rshEj7T8= ARC-Authentication-Results: i=1; imf12.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=KkDCLzPC; spf=pass (imf12.hostedemail.com: domain of david@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1724950627; a=rsa-sha256; cv=none; b=y0zJASoXQ0qjZ4042ecn38vQpAb7jw1VWNLzA4EN/YJxRYrxwSEATyAkSwBOf6x1Wsj4cL vsQU62N7CwpeL+FVmGcWv8QxaY1o/nReimHXD3wpSexxt/qeENJNxqn2Mp2K8wDWA39cfe S8LSGBWnsZuXCJiU80d9aRvl6c6vRT8= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1724950745; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=cKf6dQguD8Od23K0wDaepWW4S0v2e9lLNnYaNj5b1cU=; b=KkDCLzPCyOL/ZMZ/mXWWsPZs1vxVQbHb25R4jP5ypgTpktuRS/5TOnVz+tu2v1IsbV/tsB 3kqHeCdZA9Jan651uvdUiIRLF4V1yVh5rPMfQUEoxWzg5dxgpmZQSrUEBJ6X4OBw2cvOsK Hlp3nCuenUulRzRZ7xnGWplMMolrqcY= Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-541-ui0rW_eRM6iDl2qZZuTgKA-1; Thu, 29 Aug 2024 12:59:02 -0400 X-MC-Unique: ui0rW_eRM6iDl2qZZuTgKA-1 Received: from mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.12]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 3AA1D18F498B; Thu, 29 Aug 2024 16:59:00 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.39.193.245]) by mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id 6ACD61955F66; Thu, 29 Aug 2024 16:58:51 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand , Andrew Morton , "Matthew Wilcox (Oracle)" , Tejun Heo , Zefan Li , Johannes Weiner , =?utf-8?q?Michal_Koutn=C3=BD?= , Jonathan Corbet , Andy Lutomirski , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen Subject: [PATCH v1 14/17] fs/proc/task_mmu: remove per-page mapcount dependency for PM_MMAP_EXCLUSIVE (CONFIG_NO_PAGE_MAPCOUNT) Date: Thu, 29 Aug 2024 18:56:17 +0200 Message-ID: <20240829165627.2256514-15-david@redhat.com> In-Reply-To: <20240829165627.2256514-1-david@redhat.com> References: <20240829165627.2256514-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.12 X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 5508D40002 X-Stat-Signature: sk516785md3n14zzm68sngdbgounsjms X-Rspam-User: X-HE-Tag: 1724950746-436517 X-HE-Meta: 
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe:
Let's implement an alternative when per-page mapcounts in large folios are no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT. PM_MMAP_EXCLUSIVE will now be cleared if folio_likely_mapped_shared() is true -- when the folio is considered "mapped shared", including when it once was "mapped shared" but no longer is, as documented. This might result in an under-indication of "exclusively mapped", which is considered better than over-indicating it: under-estimating the USS (Unique Set Size) is better than over-estimating it. As an alternative, we could simply remove that flag with CONFIG_NO_PAGE_MAPCOUNT completely, but there might be value to it. So, let's keep it like that and document the behavior. Signed-off-by: David Hildenbrand --- Documentation/admin-guide/mm/pagemap.rst | 9 +++++++++ fs/proc/task_mmu.c | 16 ++++++++++++++-- 2 files changed, 23 insertions(+), 2 deletions(-)
diff --git a/Documentation/admin-guide/mm/pagemap.rst b/Documentation/admin-guide/mm/pagemap.rst index 49590306c61a0..131c86574c39a 100644 --- a/Documentation/admin-guide/mm/pagemap.rst +++ b/Documentation/admin-guide/mm/pagemap.rst @@ -37,6 +37,15 @@ There are four components to pagemap: precisely which pages are mapped (or in swap) and comparing mapped pages between processes. + Note that in some kernel configurations, all pages part of a larger + allocation (e.g., THP) might be considered "mapped shared" if the large + allocation is considered "mapped shared": if not all pages are exclusive to + the same process. Further, some kernel configurations might consider larger + allocations "mapped shared", if they were at one point considered + "mapped shared", even if they would now be considered "exclusively mapped".
+ Consequently, in these kernel configurations, bit 56 might be set although + the page is actually "exclusively mapped" + Efficient users of this interface will use ``/proc/pid/maps`` to determine which areas of memory are actually mapped and llseek to skip over unmapped regions. diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 5f171ad7b436b..f35a63c4b7c7a 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c @@ -29,6 +29,18 @@ #include #include "internal.h" +#ifdef CONFIG_PAGE_MAPCOUNT +static bool __folio_page_mapped_exclusively(struct folio *folio, struct page *page) +{ + return folio_precise_page_mapcount(folio, page) == 1; +} +#else /* !CONFIG_PAGE_MAPCOUNT */ +static bool __folio_page_mapped_exclusively(struct folio *folio, struct page *page) +{ + return !folio_likely_mapped_shared(folio); +} +#endif /* CONFIG_PAGE_MAPCOUNT */ + #define SEQ_PUT_DEC(str, val) \ seq_put_decimal_ull_width(m, str, (val) << (PAGE_SHIFT-10), 8) void task_mem(struct seq_file *m, struct mm_struct *mm) @@ -1746,7 +1758,7 @@ static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm, if (!folio_test_anon(folio)) flags |= PM_FILE; if ((flags & PM_PRESENT) && - folio_precise_page_mapcount(folio, page) == 1) + __folio_page_mapped_exclusively(folio, page)) flags |= PM_MMAP_EXCLUSIVE; } if (vma->vm_flags & VM_SOFTDIRTY) @@ -1821,7 +1833,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end, pagemap_entry_t pme; if (folio && (flags & PM_PRESENT) && - folio_precise_page_mapcount(folio, page + idx) == 1) + __folio_page_mapped_exclusively(folio, page)) cur_flags |= PM_MMAP_EXCLUSIVE; pme = make_pme(frame, cur_flags); From patchwork Thu Aug 29 16:56:18 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Hildenbrand X-Patchwork-Id: 13783480 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 4CC6CC87FCB for ; Thu, 29 Aug 2024 16:59:18 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id D33176B00AA; Thu, 29 Aug 2024 12:59:17 -0400 (EDT) Received: by kanga.kvack.org (Postfix, from userid 40) id CE1B26B00AC; Thu, 29 Aug 2024 12:59:17 -0400 (EDT) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id B83796B00AD; Thu, 29 Aug 2024 12:59:17 -0400 (EDT) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0015.hostedemail.com [216.40.44.15]) by kanga.kvack.org (Postfix) with ESMTP id 919B76B00AA for ; Thu, 29 Aug 2024 12:59:17 -0400 (EDT) Received: from smtpin03.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay06.hostedemail.com (Postfix) with ESMTP id 4FF5BAA24D for ; Thu, 29 Aug 2024 16:59:17 +0000 (UTC) X-FDA: 82505893554.03.FF4E35E Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf18.hostedemail.com (Postfix) with ESMTP id 9C06F1C001B for ; Thu, 29 Aug 2024 16:59:15 +0000 (UTC) Authentication-Results: imf18.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=JlaYG4gg; spf=pass (imf18.hostedemail.com: domain of david@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Seal: i=1; s=arc-20220608; d=hostedemail.com; t=1724950710; a=rsa-sha256; cv=none; 
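[Editorial aside, not part of the patch: a small sketch of how user space would read that bit for a single address. In a pagemap entry, bit 63 is "page present" and bit 56 is "page exclusively mapped"; per the description above, with CONFIG_NO_PAGE_MAPCOUNT a clear bit 56 only means the page may have been considered "mapped shared" at some point, not necessarily that it is shared right now. This is plain pagemap plumbing, not code from the series.]

/*
 * Editorial sketch: print the present and exclusively-mapped bits of the
 * pagemap entry covering <hex-vaddr> in process <pid>.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PM_PRESENT        (1ULL << 63)
#define PM_MMAP_EXCLUSIVE (1ULL << 56)

int main(int argc, char **argv)
{
	long psize = sysconf(_SC_PAGESIZE);
	uint64_t pme = 0;
	uintptr_t vaddr;
	char path[64];
	int fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <pid> <hex-vaddr>\n", argv[0]);
		return 1;
	}
	vaddr = strtoull(argv[2], NULL, 16);
	snprintf(path, sizeof(path), "/proc/%s/pagemap", argv[1]);

	fd = open(path, O_RDONLY);
	if (fd < 0 || pread(fd, &pme, sizeof(pme),
			    (vaddr / psize) * sizeof(pme)) != sizeof(pme)) {
		perror(path);
		return 1;
	}
	close(fd);

	printf("present=%d exclusive=%d\n",
	       !!(pme & PM_PRESENT), !!(pme & PM_MMAP_EXCLUSIVE));
	return 0;
}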
b=w3kekuZS/RTDkGPTrd5qzMDxMrLnfXts68Vnn8jEm1Th9HSLK/EtdY//feIZEEJ6JJC6C7 q6DIeUx0B5gFLBqUQ43yt5JfHZZK+KJMzbq+Fgy4oQ0gL0ddaQf5hO/vVMu6r8EfdiuzXi hyFk7HXGXemsaftScTIa+EFzzQlBU9o= ARC-Authentication-Results: i=1; imf18.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=JlaYG4gg; spf=pass (imf18.hostedemail.com: domain of david@redhat.com designates 170.10.129.124 as permitted sender) smtp.mailfrom=david@redhat.com; dmarc=pass (policy=none) header.from=redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1724950710; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=MiGCooKskvY1haSoDErYW63CNhCWGlypowkYLoi/78c=; b=4MPmDWhO1hI6TYKkIyTHWJHIN2hREPjodyIGlem226ddnsfw7kaF5Rh0xbhn4c3U/r2ann tvba99WZnKKCPxm+RSuhgz5e3FMN8mBmzIbRjjLWHN8QINJchQFhpNRbKSljvActOUBaYU 9fxL88Z/Qd68woJPzJ60A0b/c6irAEM= DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1724950755; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=MiGCooKskvY1haSoDErYW63CNhCWGlypowkYLoi/78c=; b=JlaYG4ggMiutZJ6H3Un4SXHxPC9ict6ee9mC22M+8KWxaa/pT67o/trsfeaWk9dm5XRAip q+1f9iqMmcR2SU5/840eeFcbWnQGBCoVo+/ZQVH5mtZUNA9r/iqBJ3Q8/QsZmMMVI8UnTz p2l0peQtI3hsgdMb7iAc0N7MLor8Ed8= Received: from mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id us-mta-169-8aXbFsVPOVG-1DFlGxFIdw-1; Thu, 29 Aug 2024 12:59:11 -0400 X-MC-Unique: 8aXbFsVPOVG-1DFlGxFIdw-1 Received: from mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.12]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256) (No client certificate requested) by mx-prod-mc-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS id 6AE9218BC2D7; Thu, 29 Aug 2024 16:59:08 +0000 (UTC) Received: from t14s.redhat.com (unknown [10.39.193.245]) by mx-prod-int-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP id A5CBF1955F21; Thu, 29 Aug 2024 16:59:00 +0000 (UTC) From: David Hildenbrand To: linux-kernel@vger.kernel.org Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand , Andrew Morton , "Matthew Wilcox (Oracle)" , Tejun Heo , Zefan Li , Johannes Weiner , =?utf-8?q?Michal_Koutn=C3=BD?= , Jonathan Corbet , Andy Lutomirski , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen Subject: [PATCH v1 15/17] fs/proc/task_mmu: remove per-page mapcount dependency for "mapmax" (CONFIG_NO_PAGE_MAPCOUNT) Date: Thu, 29 Aug 2024 18:56:18 +0200 Message-ID: <20240829165627.2256514-16-david@redhat.com> In-Reply-To: <20240829165627.2256514-1-david@redhat.com> References: <20240829165627.2256514-1-david@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.12 X-Stat-Signature: t7ccsqigf3ztuhhxqtgsxzkt79gcp5mx X-Rspamd-Queue-Id: 9C06F1C001B X-Rspam-User: X-Rspamd-Server: rspam10 X-HE-Tag: 1724950755-258858 X-HE-Meta: 
X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe:
Let's implement an alternative when per-page mapcounts in large folios are no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT. For calculating "mapmax", we now use the average per-page mapcount in a large folio instead of the per-page mapcount. For hugetlb folios and folios that are not partially mapped into MMs, there is no change. Likely, this change will not matter much in practice, and an alternative might be to simply remove this stat with CONFIG_NO_PAGE_MAPCOUNT. However, there might be value to it, so let's keep it like that and document the behavior. Signed-off-by: David Hildenbrand --- Documentation/filesystems/proc.rst | 5 +++++ fs/proc/task_mmu.c | 8 +++++++- 2 files changed, 12 insertions(+), 1 deletion(-)
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst index e834779d96115..bed03e77c0f91 100644 --- a/Documentation/filesystems/proc.rst +++ b/Documentation/filesystems/proc.rst @@ -684,6 +684,11 @@ Where: node locality page counters (N0 == node0, N1 == node1, ...) and the kernel page size, in KB, that is backing the mapping up. +Note that some kernel configurations do not track the precise number of times +a page part of a larger allocation (e.g., THP) is mapped. In these +configurations, "mapmax" might correspond to the average number of mappings +per page in such a larger allocation instead.
+ 1.2 Kernel data ---------------
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index f35a63c4b7c7a..3d9fe99346478 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c
@@ -2872,7 +2872,13 @@ static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty, unsigned long nr_pages) { struct folio *folio = page_folio(page); - int count = folio_precise_page_mapcount(folio, page); + int count; + +#ifdef CONFIG_PAGE_MAPCOUNT + count = folio_precise_page_mapcount(folio, page); +#else + count = max_t(int, folio_average_page_mapcount(folio), 1); +#endif md->pages += nr_pages; if (pte_dirty || folio_test_dirty(folio))
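To make the approximation concrete, here is a minimal stand-alone sketch of how a "mapmax"-style maximum can be derived from a folio-wide mapcount rather than from per-page mapcounts. It is an illustration only: struct folio_sample, average_page_mapcount() and the numbers are made up for this example and merely mimic what folio_average_page_mapcount() is described to do in the patch above.

  #include <stdio.h>

  struct folio_sample {
  	int nr_pages;		/* pages in the large folio */
  	int large_mapcount;	/* sum of all per-page mappings */
  };

  /* integer division, as an averaging helper would; 0 hints at a partial mapping */
  static int average_page_mapcount(const struct folio_sample *f)
  {
  	return f->large_mapcount / f->nr_pages;
  }

  int main(void)
  {
  	struct folio_sample folios[] = {
  		{ 512, 3 * 512 },	/* fully mapped by three processes */
  		{ 512, 412 },		/* partially mapped by one process */
  	};
  	int i, mapcount_max = 0;

  	for (i = 0; i < 2; i++) {
  		int count = average_page_mapcount(&folios[i]);

  		/* clamp to at least 1: each present page is mapped at least once here */
  		if (count < 1)
  			count = 1;
  		if (count > mapcount_max)
  			mapcount_max = count;
  	}
  	printf("mapmax: %d\n", mapcount_max);	/* prints 3 */
  	return 0;
  }

With precise per-page mapcounts both folios would be handled page by page; with only the folio-wide counter, the average (clamped to at least 1) is the best available stand-in.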
From patchwork Thu Aug 29 16:56:19 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13783481
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný, Jonathan Corbet, Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
Subject: [PATCH v1 16/17] fs/proc/task_mmu: remove per-page mapcount dependency for smaps/smaps_rollup (CONFIG_NO_PAGE_MAPCOUNT)
Date: Thu, 29 Aug 2024 18:56:19 +0200
Message-ID: <20240829165627.2256514-17-david@redhat.com>
In-Reply-To: <20240829165627.2256514-1-david@redhat.com>
References: <20240829165627.2256514-1-david@redhat.com>
MIME-Version: 1.0
Let's implement an alternative when per-page mapcounts in large folios are no longer maintained -- soon with CONFIG_NO_PAGE_MAPCOUNT.

When computing the output for smaps / smaps_rollup, in particular when calculating the USS (Unique Set Size) and the PSS (Proportional Set Size), we still rely on per-page mapcounts.

To determine private vs. shared, we'll use folio_likely_mapped_shared(), similar to how we handle PM_MMAP_EXCLUSIVE. Similarly, we might now under-estimate the USS and count pages towards "shared" that are actually "private" ("exclusively mapped").

When calculating the PSS, we'll now also use the average per-page mapcount for large folios: this can result in both an over-estimation and an under-estimation of the PSS. The difference is not expected to matter much in practice, but we'll have to learn as we go.

We can now provide folio_precise_page_mapcount() only with CONFIG_PAGE_MAPCOUNT, and remove one of the last users of per-page mapcounts when CONFIG_NO_PAGE_MAPCOUNT is enabled. Document the new behavior.

Signed-off-by: David Hildenbrand
---
Documentation/filesystems/proc.rst | 13 +++++++++++++ fs/proc/internal.h | 2 ++ fs/proc/task_mmu.c | 17 +++++++++++++++-- 3 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst index bed03e77c0f91..7cbab4135f244 100644 --- a/Documentation/filesystems/proc.rst +++ b/Documentation/filesystems/proc.rst
@@ -504,6 +504,19 @@ Note that even a page which is part of a MAP_SHARED mapping, but has only a single pte mapped, i.e. is currently used by only one process, is accounted as private and not as shared. +Note that in some kernel configurations, all pages part of a larger allocation +(e.g., THP) might be considered "shared" if the large allocation is +considered "shared", that is, if not all pages are exclusive to the same process. +Further, some kernel configurations might consider larger allocations "shared" +if they were at one point considered "shared", even if they would now be +considered "exclusive". + +Some kernel configurations do not track the precise number of times a page part +of a larger allocation is mapped. In this case, when calculating the PSS, the +average number of mappings per page in this larger allocation might be used +as an approximation for the number of mappings of a page. The PSS calculation +will be imprecise in these configurations. + "Referenced" indicates the amount of memory currently marked as referenced or accessed.
diff --git a/fs/proc/internal.h b/fs/proc/internal.h index 3c687f97e18c4..8c9ef19526d2b 100644 --- a/fs/proc/internal.h +++ b/fs/proc/internal.h
@@ -143,6 +143,7 @@ unsigned name_to_int(const struct qstr *qstr); /* Worst case buffer size needed for holding an integer. */ #define PROC_NUMBUF 13 +#ifdef CONFIG_PAGE_MAPCOUNT /** * folio_precise_page_mapcount() - Number of mappings of this folio page. * @folio: The folio.
@@ -173,6 +174,7 @@ static inline int folio_precise_page_mapcount(struct folio *folio, return mapcount; } +#endif /* CONFIG_PAGE_MAPCOUNT */ /** * folio_average_page_mapcount() - Average number of mappings per page in this
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c index 3d9fe99346478..30306e231ff04 100644 --- a/fs/proc/task_mmu.c +++ b/fs/proc/task_mmu.c
@@ -734,6 +734,8 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page, struct folio *folio = page_folio(page); int i, nr = compound ? compound_nr(page) : 1; unsigned long size = nr * PAGE_SIZE; + bool exclusive; + int mapcount; /* * First accumulate quantities that depend only on |size| and the type
@@ -774,18 +776,29 @@ static void smaps_account(struct mem_size_stats *mss, struct page *page, dirty, locked, present); return; } + +#ifndef CONFIG_PAGE_MAPCOUNT + mapcount = folio_average_page_mapcount(folio); + exclusive = !folio_likely_mapped_shared(folio); +#endif + /* * We obtain a snapshot of the mapcount. Without holding the folio lock * this snapshot can be slightly wrong as we cannot always read the * mapcount atomically. */ for (i = 0; i < nr; i++, page++) { - int mapcount = folio_precise_page_mapcount(folio, page); unsigned long pss = PAGE_SIZE << PSS_SHIFT; + +#ifdef CONFIG_PAGE_MAPCOUNT + mapcount = folio_precise_page_mapcount(folio, page); + exclusive = mapcount < 2; +#endif + if (mapcount >= 2) pss /= mapcount; smaps_page_accumulate(mss, folio, PAGE_SIZE, pss, - dirty, locked, mapcount < 2); + dirty, locked, exclusive); } }
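To illustrate how the PSS approximation above can differ from the precise calculation, here is a stand-alone sketch. PAGE_SIZE, PSS_SHIFT and the mapcounts below are made-up example values; only the arithmetic mirrors what smaps_account() does with and without precise per-page mapcounts.

  #include <stdio.h>

  #define PAGE_SIZE 4096UL
  #define PSS_SHIFT 12

  int main(void)
  {
  	/* per-page mapcounts of an 8-page folio, as a precise kernel would see them */
  	int precise[8] = { 2, 2, 2, 2, 1, 1, 1, 1 };
  	unsigned long pss_precise = 0, pss_avg = 0;
  	int i, sum = 0, avg;

  	/* precise variant: each page is divided by its own mapcount */
  	for (i = 0; i < 8; i++) {
  		unsigned long pss = PAGE_SIZE << PSS_SHIFT;

  		if (precise[i] >= 2)
  			pss /= precise[i];
  		pss_precise += pss;
  		sum += precise[i];
  	}

  	/* average-based variant: one mapcount for every page of the folio */
  	avg = sum / 8;	/* 1 here, so every page is accounted in full */
  	for (i = 0; i < 8; i++) {
  		unsigned long pss = PAGE_SIZE << PSS_SHIFT;

  		if (avg >= 2)
  			pss /= avg;
  		pss_avg += pss;
  	}

  	printf("precise PSS: %lu KiB\n", (pss_precise >> PSS_SHIFT) / 1024);	/* 24 */
  	printf("average PSS: %lu KiB\n", (pss_avg >> PSS_SHIFT) / 1024);	/* 32 */
  	return 0;
  }

Here the averaging over-estimates the PSS (32 KiB instead of 24 KiB); with the mapcounts skewed the other way it can just as well under-estimate it, which is the trade-off described above.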
From patchwork Thu Aug 29 16:56:20 2024
X-Patchwork-Submitter: David Hildenbrand
X-Patchwork-Id: 13783482
From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org, linux-fsdevel@vger.kernel.org, David Hildenbrand, Andrew Morton, "Matthew Wilcox (Oracle)", Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný, Jonathan Corbet, Andy Lutomirski, Thomas Gleixner, Ingo Molnar, Borislav Petkov, Dave Hansen
Subject: [PATCH v1 17/17] mm: stop maintaining the per-page mapcount of large folios (CONFIG_NO_PAGE_MAPCOUNT)
Date: Thu, 29 Aug 2024 18:56:20 +0200
Message-ID: <20240829165627.2256514-18-david@redhat.com>
In-Reply-To: <20240829165627.2256514-1-david@redhat.com>
References: <20240829165627.2256514-1-david@redhat.com>
MIME-Version: 1.0
Everything is in place to stop using the per-page mapcounts in large folios with CONFIG_NO_PAGE_MAPCOUNT: the mapcount of tail pages will always be logically 0 (-1 value), just like it currently is for hugetlb folios already, and the page mapcount of the head page is either 0 (-1 value) or contains a page type (e.g., hugetlb).

Maintaining _nr_pages_mapped without per-page mapcounts is impossible, so that one also has to go with CONFIG_NO_PAGE_MAPCOUNT.

There are two remaining implications:

(1) Per-node, per-cgroup and per-lruvec stats of "NR_ANON_MAPPED" ("mapped anonymous memory") and "NR_FILE_MAPPED" ("mapped file memory"): As soon as any page of the folio is mapped -- folio_mapped() -- we now account the complete folio as mapped. Once the last page is unmapped -- !folio_mapped() -- we account the complete folio as unmapped. This implies that ... * "AnonPages" and "Mapped" in /proc/meminfo and /sys/devices/system/node/*/meminfo * cgroup v2: "anon" and "file_mapped" in "memory.stat" and "memory.numa_stat" * cgroup v1: "rss" and "mapped_file" in "memory.stat" and "memory.numa_stat" ... can now appear higher than before. But note that these folios do consume that memory, simply not all pages are actually currently mapped. It's worth noting that other accounting in the kernel (esp. cgroup charging on allocation) is not affected by this change. [why oh why is "anon" called "rss" in cgroup v1]

(2) Detecting partial mappings Detecting whether anon THPs are partially mapped gets a bit more unreliable. As long as a single MM maps such a large folio ("exclusively mapped"), we can reliably detect it. Especially before fork() / after a short-lived child process quit, we will detect partial mappings reliably, which is the common case. In essence, if the average per-page mapcount in an anon THP is < 1, we know for sure that we have a partial mapping.
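As a stand-alone illustration of that rule -- with made-up numbers, and a plain integer standing in for the folio-wide mapcount -- consider:

  #include <stdbool.h>
  #include <stdio.h>

  /* average per-page mapcount < 1 <=> certainly partially mapped */
  static bool certainly_partially_mapped(int large_mapcount, int nr_pages)
  {
  	return large_mapcount / nr_pages < 1;
  }

  int main(void)
  {
  	/* 512-page THP fully mapped once: average is 1, not flagged */
  	printf("%d\n", certainly_partially_mapped(512, 512));	/* 0 */
  	/* 100 of its pages zapped: 412 / 512 rounds down to 0, flagged */
  	printf("%d\n", certainly_partially_mapped(412, 512));	/* 1 */
  	return 0;
  }
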
However, as soon as multiple MMs are involved, we might miss detecting partial mappings: this might be relevant with long-lived child processes. If we have a fully-mapped anon folio before fork(), once our child processes and our parent all unmap (zap/COW) the same pages (but not the complete folio), we might not detect the partial mapping. However, once the child processes quit we would detect the partial mapping. How relevant this case is in practice remains to be seen. Swapout/migration will likely mitigate this.

In the future, RMAP walkers should check for partial mappings of "mapped shared" anon folios, and flag them for deferred-splitting.

There are a couple of remaining per-page mapcount users we won't touch for now:

(1) __dump_folio(): we'll tackle that separately later. For now, it will always read effective mapcount of "0" for pages in large folios.

(2) include/trace/events/page_ref.h: we should rework the whole handling to be folio-aware and simply trace folio_mapcount(). Let's leave it around for now; it might still be helpful to trace the raw page mapcount value (e.g., including the page type).

(3) mm/mm_init.c: to initialize the mapcount/type field to -1. Will be required until we have decoupled type+mapcount (e.g., moving it into "struct folio"), and until we initialize the type+mapcount when allocating a folio.

(4) mm/page_alloc.c: to sanity-check that the mapcount/type field is -1 when a page gets freed. We could probably remove at least the tail page mapcount check in non-debug environments.

Some added ifdefery seems unavoidable for now: at least it's mostly limited to the rmap add/remove core primitives.

Extend documentation.

Signed-off-by: David Hildenbrand
---
.../admin-guide/cgroup-v1/memory.rst | 4 ++ Documentation/admin-guide/cgroup-v2.rst | 10 ++- Documentation/filesystems/proc.rst | 10 ++- Documentation/mm/transhuge.rst | 31 +++++++--- include/linux/mm_types.h | 4 ++ include/linux/rmap.h | 10 ++- mm/internal.h | 21 +++++-- mm/page_alloc.c | 2 + mm/rmap.c | 61 +++++++++++++++++++ 9 files changed, 133 insertions(+), 20 deletions(-)
diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst index 270501db9f4e8..2e2bbf944eea9 100644 --- a/Documentation/admin-guide/cgroup-v1/memory.rst +++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -615,6 +615,10 @@ memory.stat file includes following statistics: 'rss + mapped_file" will give you resident set size of cgroup. + Note that some kernel configurations might account complete larger + allocations (e.g., THP) towards 'rss' and 'mapped_file', even if + only some, but not all that memory is mapped. + (Note: file and shmem may be shared among other cgroups. In that case, mapped_file is accounted only when the memory cgroup is owner of page cache.)
diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst index e25e8b2698b95..039bdf49854f3 100644 --- a/Documentation/admin-guide/cgroup-v2.rst +++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1429,7 +1429,10 @@ The following nested keys are defined. anon Amount of memory used in anonymous mappings such as - brk(), sbrk(), and mmap(MAP_ANONYMOUS) + brk(), sbrk(), and mmap(MAP_ANONYMOUS). Note that + some kernel configurations might account complete larger + allocations (e.g., THP) if only some, but not all the + memory of such an allocation is mapped anymore. file Amount of memory used to cache filesystem data,
@@ -1472,7 +1475,10 @@ The following nested keys are defined.
Amount of application memory swapped out to zswap. file_mapped - Amount of cached filesystem data mapped with mmap() + Amount of cached filesystem data mapped with mmap(). Note + that some kernel configurations might account complete + larger allocations (e.g., THP) if only some, but + not all the memory of such an allocation is mapped. file_dirty Amount of cached filesystem data that was modified but
diff --git a/Documentation/filesystems/proc.rst b/Documentation/filesystems/proc.rst index 7cbab4135f244..c6d6474738577 100644 --- a/Documentation/filesystems/proc.rst +++ b/Documentation/filesystems/proc.rst
@@ -1148,9 +1148,15 @@ Dirty Writeback Memory which is actively being written back to the disk AnonPages - Non-file backed pages mapped into userspace page tables + Non-file backed pages mapped into userspace page tables. Note that + some kernel configurations might consider all pages part of a + larger allocation (e.g., THP) as "mapped", as soon as a single + page is mapped. Mapped - files which have been mmapped, such as libraries + files which have been mmapped, such as libraries. Note that some + kernel configurations might consider all pages part of a larger + allocation (e.g., THP) as "mapped", as soon as a single page is + mapped. Shmem Total memory used by shared memory (shmem) and tmpfs KReclaimable
diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst index 0ee58108a4d14..0d34f3ac13d8c 100644 --- a/Documentation/mm/transhuge.rst +++ b/Documentation/mm/transhuge.rst
@@ -116,23 +116,28 @@ pages: succeeds on tail pages. - map/unmap of a PMD entry for the whole THP increment/decrement - folio->_entire_mapcount, increment/decrement folio->_large_mapcount - and also increment/decrement folio->_nr_pages_mapped by ENTIRELY_MAPPED - when _entire_mapcount goes from -1 to 0 or 0 to -1. + folio->_entire_mapcount and folio->_large_mapcount. With CONFIG_MM_ID, we also maintain the two slots for tracking MM owners (MM ID and corresponding mapcount), and the current status ("mapped shared" vs. "mapped exclusively"). + With CONFIG_PAGE_MAPCOUNT, we also increment/decrement + folio->_nr_pages_mapped by ENTIRELY_MAPPED when _entire_mapcount goes + from -1 to 0 or 0 to -1. + - map/unmap of individual pages with PTE entry increment/decrement - page->_mapcount, increment/decrement folio->_large_mapcount and also - increment/decrement folio->_nr_pages_mapped when page->_mapcount goes - from -1 to 0 or 0 to -1 as this counts the number of pages mapped by PTE. + folio->_large_mapcount. With CONFIG_MM_ID, we also maintain the two slots for tracking MM owners (MM ID and corresponding mapcount), and the current status ("mapped shared" vs. "mapped exclusively"). + With CONFIG_PAGE_MAPCOUNT, we also increment/decrement + page->_mapcount and increment/decrement folio->_nr_pages_mapped when + page->_mapcount goes from -1 to 0 or 0 to -1 as this counts the number + of pages mapped by PTE. + split_huge_page internally has to distribute the refcounts in the head page to the tail pages before clearing all PG_head/tail bits from the page structures. It can be done easily for refcounts taken by page table
-Partial unmap and deferred_split_folio() -======================================== +Partial unmap and deferred_split_folio() (anon THP only) +======================================================== Unmapping part of THP (with munmap() or other way) is not going to free memory immediately. Instead, we detect that a subpage of THP is not in use @@ -175,3 +180,13 @@ a THP crosses a VMA boundary. The function deferred_split_folio() is used to queue a folio for splitting. The splitting itself will happen when we get memory pressure via shrinker interface. + +With CONFIG_PAGE_MAPCOUNT, we reliably detect partial mappings based on +folio->_nr_pages_mapped. + +With CONFIG_NO_PAGE_MAPCOUNT, we detect partial mappings based on the +average per-page mapcount in a THP: if the average is < 1, an anon THP is +certainly partially mapped. As long as only a single process maps a THP, +this detection is reliable. With long-running child processes, there can +be scenarios where partial mappings can currently not be detected, and +might need asynchronous detection during memory reclaim in the future. diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h index 6d27856686439..2adf1839bcb0d 100644 --- a/include/linux/mm_types.h +++ b/include/linux/mm_types.h @@ -378,7 +378,11 @@ struct folio { struct { atomic_t _large_mapcount; atomic_t _entire_mapcount; +#ifdef CONFIG_PAGE_MAPCOUNT atomic_t _nr_pages_mapped; +#else /* !CONFIG_PAGE_MAPCOUNT */ + int _unused_1; +#endif /* !CONFIG_PAGE_MAPCOUNT */ atomic_t _pincount; #ifdef CONFIG_MM_ID int _mm0_mapcount; diff --git a/include/linux/rmap.h b/include/linux/rmap.h index ff2a16864deed..345d93636b2b1 100644 --- a/include/linux/rmap.h +++ b/include/linux/rmap.h @@ -219,7 +219,7 @@ static __always_inline void folio_set_large_mapcount(struct folio *folio, VM_WARN_ON_ONCE(folio->_mm1_mapcount >= 0); } -static __always_inline void folio_add_large_mapcount(struct folio *folio, +static __always_inline int folio_add_large_mapcount(struct folio *folio, int diff, struct vm_area_struct *vma) { const unsigned int mm_id = vma->vm_mm->mm_id; @@ -264,11 +264,12 @@ static __always_inline void folio_add_large_mapcount(struct folio *folio, folio_clear_large_mapped_exclusively(folio); } folio_unlock_large_mapcount_data(folio); + return mapcount_val + 1; } #define folio_inc_large_mapcount(folio, vma) \ folio_add_large_mapcount(folio, 1, vma) -static __always_inline void folio_sub_large_mapcount(struct folio *folio, +static __always_inline int folio_sub_large_mapcount(struct folio *folio, int diff, struct vm_area_struct *vma) { const unsigned int mm_id = vma->vm_mm->mm_id; @@ -294,6 +295,7 @@ static __always_inline void folio_sub_large_mapcount(struct folio *folio, folio->_mm1_mapcount == mapcount_val) folio_set_large_mapped_exclusively(folio); folio_unlock_large_mapcount_data(folio); + return mapcount_val + 1; } #define folio_dec_large_mapcount(folio, vma) \ folio_sub_large_mapcount(folio, 1, vma) @@ -493,9 +495,11 @@ static __always_inline void __folio_dup_file_rmap(struct folio *folio, break; } +#ifdef CONFIG_PAGE_MAPCOUNT do { atomic_inc(&page->_mapcount); } while (page++, --nr_pages > 0); +#endif folio_add_large_mapcount(folio, orig_nr_pages, dst_vma); break; case RMAP_LEVEL_PMD: @@ -592,7 +596,9 @@ static __always_inline int __folio_try_dup_anon_rmap(struct folio *folio, do { if (PageAnonExclusive(page)) ClearPageAnonExclusive(page); +#ifdef CONFIG_PAGE_MAPCOUNT atomic_inc(&page->_mapcount); +#endif } while (page++, --nr_pages > 0); folio_add_large_mapcount(folio, 
orig_nr_pages, dst_vma); break; diff --git a/mm/internal.h b/mm/internal.h index da38c747c73d4..9fb78ce3c2eb3 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -60,6 +60,13 @@ struct folio_batch; void page_writeback_init(void); +/* + * Flags passed to __show_mem() and show_free_areas() to suppress output in + * various contexts. + */ +#define SHOW_MEM_FILTER_NODES (0x0001u) /* disallowed nodes */ + +#ifdef CONFIG_PAGE_MAPCOUNT /* * If a 16GB hugetlb folio were mapped by PTEs of all of its 4kB pages, * its nr_pages_mapped would be 0x400000: choose the ENTIRELY_MAPPED bit @@ -69,12 +76,6 @@ void page_writeback_init(void); #define ENTIRELY_MAPPED 0x800000 #define FOLIO_PAGES_MAPPED (ENTIRELY_MAPPED - 1) -/* - * Flags passed to __show_mem() and show_free_areas() to suppress output in - * various contexts. - */ -#define SHOW_MEM_FILTER_NODES (0x0001u) /* disallowed nodes */ - /* * How many individual pages have an elevated _mapcount. Excludes * the folio's entire_mapcount. @@ -85,6 +86,12 @@ static inline int folio_nr_pages_mapped(const struct folio *folio) { return atomic_read(&folio->_nr_pages_mapped) & FOLIO_PAGES_MAPPED; } +#else /* !CONFIG_PAGE_MAPCOUNT */ +static inline int folio_nr_pages_mapped(const struct folio *folio) +{ + return -1; +} +#endif /* !CONFIG_PAGE_MAPCOUNT */ /* * Retrieve the first entry of a folio based on a provided entry within the @@ -663,7 +670,9 @@ static inline void prep_compound_head(struct page *page, unsigned int order) folio_set_order(folio, order); atomic_set(&folio->_large_mapcount, -1); atomic_set(&folio->_entire_mapcount, -1); +#ifdef CONFIG_PAGE_MAPCOUNT atomic_set(&folio->_nr_pages_mapped, 0); +#endif /* CONFIG_PAGE_MAPCOUNT */ atomic_set(&folio->_pincount, 0); #ifdef CONFIG_MM_ID folio->_mm0_mapcount = -1; diff --git a/mm/page_alloc.c b/mm/page_alloc.c index c81f29e29b82d..bdb57540cdffa 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -951,10 +951,12 @@ static int free_tail_page_prepare(struct page *head_page, struct page *page) bad_page(page, "nonzero large_mapcount"); goto out; } +#ifdef CONFIG_PAGE_MAPCOUNT if (unlikely(atomic_read(&folio->_nr_pages_mapped))) { bad_page(page, "nonzero nr_pages_mapped"); goto out; } +#endif if (unlikely(atomic_read(&folio->_pincount))) { bad_page(page, "nonzero pincount"); goto out; diff --git a/mm/rmap.c b/mm/rmap.c index 226b188499f91..888394ff9dd5b 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -1156,7 +1156,9 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio, struct page *page, int nr_pages, struct vm_area_struct *vma, enum rmap_level level, int *nr_pmdmapped) { +#ifdef CONFIG_PAGE_MAPCOUNT atomic_t *mapped = &folio->_nr_pages_mapped; +#endif /* CONFIG_PAGE_MAPCOUNT */ const int orig_nr_pages = nr_pages; int first = 0, nr = 0; @@ -1169,6 +1171,7 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio, break; } +#ifdef CONFIG_PAGE_MAPCOUNT do { first += atomic_inc_and_test(&page->_mapcount); } while (page++, --nr_pages > 0); @@ -1178,9 +1181,18 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio, nr = first; folio_add_large_mapcount(folio, orig_nr_pages, vma); +#else /* !CONFIG_PAGE_MAPCOUNT */ + nr = folio_add_large_mapcount(folio, orig_nr_pages, vma); + if (nr == orig_nr_pages) + /* Was completely unmapped. 
*/ + nr = folio_large_nr_pages(folio); + else + nr = 0; +#endif /* CONFIG_PAGE_MAPCOUNT */ break; case RMAP_LEVEL_PMD: first = atomic_inc_and_test(&folio->_entire_mapcount); +#ifdef CONFIG_PAGE_MAPCOUNT if (first) { nr = atomic_add_return_relaxed(ENTIRELY_MAPPED, mapped); if (likely(nr < ENTIRELY_MAPPED + ENTIRELY_MAPPED)) { @@ -1195,6 +1207,16 @@ static __always_inline unsigned int __folio_add_rmap(struct folio *folio, } } folio_inc_large_mapcount(folio, vma); +#else /* !CONFIG_PAGE_MAPCOUNT */ + if (first) + *nr_pmdmapped = folio_large_nr_pages(folio); + nr = folio_inc_large_mapcount(folio, vma); + if (nr == 1) + /* Was completely unmapped. */ + nr = folio_large_nr_pages(folio); + else + nr = 0; +#endif /* CONFIG_PAGE_MAPCOUNT */ break; } return nr; @@ -1332,6 +1354,7 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio, break; } } +#ifdef CONFIG_PAGE_MAPCOUNT for (i = 0; i < nr_pages; i++) { struct page *cur_page = page + i; @@ -1341,6 +1364,10 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio, folio_entire_mapcount(folio) > 1)) && PageAnonExclusive(cur_page), folio); } +#else /* !CONFIG_PAGE_MAPCOUNT */ + VM_WARN_ON_FOLIO(!folio_test_large(folio) && PageAnonExclusive(page) && + atomic_read(&folio->_mapcount) > 0, folio); +#endif /* !CONFIG_PAGE_MAPCOUNT */ /* * For large folio, only mlock it if it's fully mapped to VMA. It's @@ -1445,19 +1472,25 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma, struct page *page = folio_page(folio, i); /* increment count (starts at -1) */ +#ifdef CONFIG_PAGE_MAPCOUNT atomic_set(&page->_mapcount, 0); +#endif /* CONFIG_PAGE_MAPCOUNT */ if (exclusive) SetPageAnonExclusive(page); } folio_set_large_mapcount(folio, nr, vma); +#ifdef CONFIG_PAGE_MAPCOUNT atomic_set(&folio->_nr_pages_mapped, nr); +#endif /* CONFIG_PAGE_MAPCOUNT */ } else { nr = folio_large_nr_pages(folio); /* increment count (starts at -1) */ atomic_set(&folio->_entire_mapcount, 0); folio_set_large_mapcount(folio, 1, vma); +#ifdef CONFIG_PAGE_MAPCOUNT atomic_set(&folio->_nr_pages_mapped, ENTIRELY_MAPPED); +#endif /* CONFIG_PAGE_MAPCOUNT */ if (exclusive) SetPageAnonExclusive(&folio->page); nr_pmdmapped = nr; @@ -1527,7 +1560,9 @@ static __always_inline void __folio_remove_rmap(struct folio *folio, struct page *page, int nr_pages, struct vm_area_struct *vma, enum rmap_level level) { +#ifdef CONFIG_PAGE_MAPCOUNT atomic_t *mapped = &folio->_nr_pages_mapped; +#endif /* CONFIG_PAGE_MAPCOUNT */ int last = 0, nr = 0, nr_pmdmapped = 0; bool partially_mapped = false; @@ -1540,6 +1575,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio, break; } +#ifdef CONFIG_PAGE_MAPCOUNT folio_sub_large_mapcount(folio, nr_pages, vma); do { last += atomic_add_negative(-1, &page->_mapcount); @@ -1550,8 +1586,20 @@ static __always_inline void __folio_remove_rmap(struct folio *folio, nr = last; partially_mapped = nr && atomic_read(mapped); +#else /* !CONFIG_PAGE_MAPCOUNT */ + nr = folio_sub_large_mapcount(folio, nr_pages, vma); + if (!nr) { + /* Now completely unmapped. 
*/ + nr = folio_nr_pages(folio); + } else { + partially_mapped = nr < folio_large_nr_pages(folio) && + !folio_entire_mapcount(folio); + nr = 0; + } +#endif /* !CONFIG_PAGE_MAPCOUNT */ break; case RMAP_LEVEL_PMD: +#ifdef CONFIG_PAGE_MAPCOUNT folio_dec_large_mapcount(folio, vma); last = atomic_add_negative(-1, &folio->_entire_mapcount); if (last) { @@ -1569,6 +1617,19 @@ static __always_inline void __folio_remove_rmap(struct folio *folio, } partially_mapped = nr && nr < nr_pmdmapped; +#else /* !CONFIG_PAGE_MAPCOUNT */ + last = atomic_add_negative(-1, &folio->_entire_mapcount); + if (last) + nr_pmdmapped = folio_large_nr_pages(folio); + nr = folio_dec_large_mapcount(folio, vma); + if (!nr) { + /* Now completely unmapped. */ + nr = folio_large_nr_pages(folio); + } else { + partially_mapped = last && nr < folio_large_nr_pages(folio); + nr = 0; + } +#endif /* !CONFIG_PAGE_MAPCOUNT */ break; }