From patchwork Tue Dec 19 07:55:26 2023
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Matthew Wilcox, Christophe Leroy, Lorenzo Stoakes, David Hildenbrand, Vlastimil Babka, Mike Kravetz, Mike Rapoport, Christoph Hellwig, John Hubbard, Andrew Jones, linux-arm-kernel@lists.infradead.org, Michael Ellerman, "Kirill A. Shutemov", linuxppc-dev@lists.ozlabs.org, Rik van Riel, linux-riscv@lists.infradead.org, Yang Shi, James Houghton, "Aneesh Kumar K. V", Andrew Morton, Jason Gunthorpe, Andrea Arcangeli, peterx@redhat.com, Axel Rasmussen
Subject: [PATCH 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
Date: Tue, 19 Dec 2023 15:55:26 +0800
Message-ID: <20231219075538.414708-2-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>

From: Peter Xu

Introduce a config option that will be selected whenever huge leaves can be involved in the pgtable (THP or hugetlbfs). It is useful for marking, under this new config, any code that can process either hugetlb or THP pages at any level higher than the pte level.
Signed-off-by: Peter Xu
---
 mm/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index 8f8b02e9c136..4ca97d959323 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -904,6 +904,9 @@ config READ_ONLY_THP_FOR_FS

 endif # TRANSPARENT_HUGEPAGE

+config PGTABLE_HAS_HUGE_LEAVES
+	def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE
+
 #
 # UP and nommu archs use km based percpu allocator
 #
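For illustration only (not something this patch adds): code shared between THP and hugetlb huge-leaf handling could later be guarded by the new option, roughly like the hypothetical snippet below; the function name is made up for the example.

	#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
	/*
	 * Hypothetical example: this block is only compiled when either THP
	 * or hugetlb can install leaves above the pte level.
	 */
	static void process_huge_leaf_example(pmd_t pmd)
	{
		/* ... common handling for a PMD-level leaf (THP or hugetlb) ... */
	}
	#endif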

From patchwork Tue Dec 19 07:55:27 2023
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 02/13] mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
Date: Tue, 19 Dec 2023 15:55:27 +0800
Message-ID: <20231219075538.414708-3-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>
From: Peter Xu

It will be used outside hugetlb.c soon.

Signed-off-by: Peter Xu
---
 include/linux/hugetlb.h | 9 +++++++++
 mm/hugetlb.c | 4 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 236ec7b63c54..f8c5c174c8a6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -174,6 +174,9 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);

 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, pud_t *pud);
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma,
+				 unsigned long address);

 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);

@@ -1221,6 +1224,12 @@ static inline void hugetlb_register_node(struct node *node)
 static inline void hugetlb_unregister_node(struct node *node)
 {
 }
+
+static inline bool hugetlbfs_pagecache_present(
+    struct hstate *h, struct vm_area_struct *vma, unsigned long address)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */

 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6feb3e0630d1..29705e5c6f40 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6018,8 +6018,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 /*
  * Return whether there is a pagecache page to back given address within VMA.
  */
-static bool hugetlbfs_pagecache_present(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long address)
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma, unsigned long address)
 {
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	pgoff_t idx = linear_page_index(vma, address);
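As a rough usage sketch (not from this series), a caller outside mm/hugetlb.c could use the now-global helper like this; the variables h, vma and address are assumed to come from the caller's own walk context:

	/* Sketch: fail a lookup if a hugetlb address has no backing page cache. */
	if (is_vm_hugetlb_page(vma) &&
	    !hugetlbfs_pagecache_present(h, vma, address))
		return ERR_PTR(-EFAULT);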

From patchwork Tue Dec 19 07:55:28 2023
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 03/13] mm: Provide generic pmd_thp_or_huge()
Date: Tue, 19 Dec 2023 15:55:28 +0800
Message-ID: <20231219075538.414708-4-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>

From: Peter Xu

ARM defines pmd_thp_or_huge() to detect either a THP or a huge PMD. It is a helpful helper if we want to merge more THP and hugetlb code paths.

Make it a generic default implementation, which only exists when CONFIG_MMU. An arch can override it by defining its own version; for example, ARM's pgtable-2level.h defines it to always return false.

Keep the macro declared for all configs; it should be optimized to false anyway if !THP && !HUGETLB.
Signed-off-by: Peter Xu
---
 include/linux/pgtable.h | 4 ++++
 mm/gup.c | 3 +--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index af7639c3b0a3..6f2fa1977b8a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1355,6 +1355,10 @@ static inline int pmd_write(pmd_t pmd)
 #endif /* pmd_write */
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */

+#ifndef pmd_thp_or_huge
+#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
+#endif
+
 #ifndef pud_write
 static inline int pud_write(pud_t pud)
 {
diff --git a/mm/gup.c b/mm/gup.c
index 0a5f0e91bfec..efc9847e58fb 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
 		if (!pmd_present(pmd))
 			return 0;

-		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
-			     pmd_devmap(pmd))) {
+		if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
 			/* See gup_pte_range() */
 			if (pmd_protnone(pmd))
 				return 0;
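In other words, with the generic fallback above, the check in gup_pmd_range() expands roughly as follows (a sketch of the macro's effect, assuming the default definition is used):

	/* pmd_thp_or_huge(pmd) with the default definition expands to: */
	if (unlikely((pmd_huge(pmd) || pmd_trans_huge(pmd)) || pmd_devmap(pmd))) {
		/* PMD-level leaf (hugetlb or THP) or devmap: take the huge path */
	}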

From patchwork Tue Dec 19 07:55:29 2023
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 04/13] mm: Make HPAGE_PXD_* macros even if !THP
Date: Tue, 19 Dec 2023 15:55:29 +0800
Message-ID: <20231219075538.414708-5-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>

From: Peter Xu

These macros can be helpful when we plan to merge hugetlb code into generic code. Move them out and define them even if !THP.

We actually already defined HPAGE_PMD_NR for other reasons even if !THP. Reorganize these macros.
Reviewed-by: Christoph Hellwig
Signed-off-by: Peter Xu
---
 include/linux/huge_mm.h | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index fa7a38a30fc6..d335130e145f 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -64,9 +64,6 @@ ssize_t single_hugepage_flag_show(struct kobject *kobj,
 				   enum transparent_hugepage_flag flag);
 extern struct kobj_attribute shmem_enabled_attr;

-#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
-#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)
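For reference, the relationships these HPAGE_* macros encode are the long-standing huge_mm.h definitions below (the PUD variants follow the same pattern with PUD_SHIFT); they are shown here as background, not as part of the diff above:

	#define HPAGE_PMD_SHIFT	PMD_SHIFT
	#define HPAGE_PMD_SIZE	((1UL) << HPAGE_PMD_SHIFT)
	#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))
	#define HPAGE_PMD_ORDER	(HPAGE_PMD_SHIFT - PAGE_SHIFT)
	#define HPAGE_PMD_NR	(1 << HPAGE_PMD_ORDER)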

From patchwork Tue Dec 19 07:55:30 2023
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 05/13] mm: Introduce vma_pgtable_walk_{begin|end}()
Date: Tue, 19 Dec 2023 15:55:30 +0800
Message-ID: <20231219075538.414708-6-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>
From: Peter Xu

Introduce per-vma begin()/end() helpers for pgtable walks. This is preparation work for merging hugetlb pgtable walkers with generic mm. The helpers need to be called before and after a pgtable walk, and will start to be needed once the pgtable walker code supports hugetlb pages. It's a hook point for any type of VMA, but for now only hugetlb uses it to stabilize the pgtable pages from getting away (due to possible pmd unsharing).

Reviewed-by: Christoph Hellwig
Signed-off-by: Peter Xu
Reviewed-by: Muchun Song
---
 include/linux/mm.h | 3 +++
 mm/memory.c | 12 ++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index b72bf25a45cf..85e43775824b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4181,4 +4181,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }

+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 1795aba53cf5..9ac6a9db971e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6270,3 +6270,15 @@ void ptlock_free(struct ptdesc *ptdesc)
 	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_unlock_read(vma);
+}
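A minimal usage sketch (assumed caller, not from this patch): a generic walker brackets its per-VMA walk with the new hooks so that hugetlb VMAs are protected against concurrent PMD unsharing for the duration of the walk.

	/* Sketch: bracket a page table walk over a single VMA. */
	vma_pgtable_walk_begin(vma);
	/* ... walk pgd/p4d/pud/pmd/pte for the range inside vma ... */
	vma_pgtable_walk_end(vma);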

From patchwork Tue Dec 19 07:55:31 2023
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
Date: Tue, 19 Dec 2023 15:55:31 +0800
Message-ID: <20231219075538.414708-7-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>

From: Peter Xu

Hugepd format for GUP is only used in PowerPC with hugetlbfs. There is some kernel usage of hugepd (see hugepd_populate_kernel() for PPC_8XX), however those pages are not candidates for GUP.

Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to file-backed mappings") added a check to fail gup-fast if there's potential risk of violating GUP over writeback file systems. That should never apply to hugepd. Considering that hugepd is an old format (and even software-only), there's no plan to extend hugepd into other file-typed memories that are prone to the same issue.

Drop that check, not only because it'll never be true for hugepd per any known plan, but also because it paves the way for reusing the function outside fast-gup.

To make sure we'll still remember this issue just in case hugepd is ever extended to support non-hugetlbfs memories, add a rich comment above gup_huge_pd(), explaining the issue with proper references.
Cc: Christoph Hellwig
Cc: Lorenzo Stoakes
Cc: Michael Ellerman
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu
---
 mm/gup.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index efc9847e58fb..bb5b7134f10b 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2820,11 +2820,6 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 		return 0;
 	}

-	if (!folio_fast_pin_allowed(folio, flags)) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
 	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
 		gup_put_folio(folio, refs, flags);
 		return 0;
@@ -2835,6 +2830,14 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	return 1;
 }

+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates. When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios. See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
 static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 		       unsigned int pdshift, unsigned long end, unsigned int flags,
 		       struct page **pages, int *nr)
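For contrast (a simplified sketch, not part of the diff above), this is the guard the other fast-gup paths keep using, and which would have to return to gup_hugepte() if hugepd ever covers file-backed memory beyond hugetlbfs:

	/* The general fast-gup guard that remains in the non-hugepd paths: */
	if (!folio_fast_pin_allowed(folio, flags)) {
		gup_put_folio(folio, refs, flags);
		return 0;	/* bail out of GUP-fast, fall back to the slow path */
	}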

From patchwork Tue Dec 19 07:55:32 2023
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 07/13] mm/gup: Refactor record_subpages() to find 1st small page
Date: Tue, 19 Dec 2023 15:55:32 +0800
Message-ID: <20231219075538.414708-8-peterx@redhat.com>
In-Reply-To: <20231219075538.414708-1-peterx@redhat.com>
References: <20231219075538.414708-1-peterx@redhat.com>

From: Peter Xu

All the fast-gup functions take a tail page to operate on, and always need to do page mask calculations before feeding it into record_subpages().

Merge that logic into record_subpages(), so that it does the nth_page() calculation itself.
Signed-off-by: Peter Xu
---
 mm/gup.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index bb5b7134f10b..82d28d517d0d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2767,13 +2767,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif

-static int record_subpages(struct page *page, unsigned long addr,
-			   unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
 {
+	struct page *start_page;
 	int nr;

+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
 	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(page, nr);
+		pages[nr] = nth_page(start_page, nr);

 	return nr;
 }
@@ -2808,8 +2811,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));

-	page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);

 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2882,8 +2885,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}

-	page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pmd_page(orig);
+	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);

 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2926,8 +2929,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					     pages, nr);
 	}

-	page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pud_page(orig);
+	refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);

 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2966,8 +2969,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	BUILD_BUG_ON(pgd_devmap(orig));

-	page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pgd_page(orig);
+	refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);

 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
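As a small worked example (illustrative numbers, not from the patch): take a PMD leaf with sz = PMD_SIZE = 2MB, 4KB pages, and addr pointing 0x3000 into the huge mapping. The refactored helper now finds the first small page itself, exactly what the callers previously computed with nth_page() before calling in.

	/*
	 * Illustration: (addr & (PMD_SIZE - 1)) = 0x3000, so
	 * (addr & (sz - 1)) >> PAGE_SHIFT = 3 -- the walk starts at the
	 * fourth 4KB subpage of the huge folio.
	 */
	page = pmd_page(orig);
	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);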
Shutemov" , linuxppc-dev@lists.ozlabs.org, Rik van Riel , linux-riscv@lists.infradead.org, Yang Shi , James Houghton , "Aneesh Kumar K . V" , Andrew Morton , Jason Gunthorpe , Andrea Arcangeli , peterx@redhat.com, Axel Rasmussen Subject: [PATCH 08/13] mm/gup: Handle hugetlb for no_page_table() Date: Tue, 19 Dec 2023 15:55:33 +0800 Message-ID: <20231219075538.414708-9-peterx@redhat.com> In-Reply-To: <20231219075538.414708-1-peterx@redhat.com> References: <20231219075538.414708-1-peterx@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.4 X-Rspam-User: X-Rspamd-Server: rspam06 X-Rspamd-Queue-Id: 3ED4D140016 X-Stat-Signature: apct8rapywgsdxjann7u8csgtfihbd6j X-HE-Tag: 1702972656-168091 X-HE-Meta: U2FsdGVkX1/GyG9J//BEWrlUCWOqiu3gxcgPrrvoUPNkK4yjp9CA572cKswXCtm2BPCm8r0DjNjybfkm+oc+0TQW36Cf+oCKs4Dh7qbb2rXxfHn7z4dNGv/TMTDn+aHsJ9YgEGF45tDUqGSjn+TCuUpckg4vaZxApVLXKytk6YyOClyD7oKWSMg2U+Tx07Tm3ywj2usPcY0/sOF99KnC3YzU520XJpLszL/vgl/oKwDS81qivHywu5IT1bxsWZsKTjjGvmhuzVzArKeNeRC5eNNE5EmC8fYUXWcsw9N3H3fzYwxm/zswy6YP6s1FWg534+EWK4srwW7QitvwFJ+Nknqre071n0QpihNT8gE0T9CpLODJWk9y7JBnyWE/E0naCIjZtig8cPlDU5rlX+8gTS2nznIyMn3aGPuYBq5TsK8dTOY0R+ifWByIm5AeoFsxl1mc/PLQ81IYFAZDMsJ+wUW78MJruwmHo4T7HDPnQjh7DlXy5qljiber4RDurs6RdPRqMbwnZ8gk7fwMp+DGx9Vk9p1D+YDx37nooCvYrhy4+ZgvttQkXYAftqlqWzS0w2sq6AhE3MTr4j9P8rIy8dMLywd86IP1vVYQAeD2wfQUzMSJJJHE8rVc3PjUFD60/Iw0xpNYg+P5WfNXTJOMsMmJAr15vhKR3xxm2MoGGwLMIjFu5rwEW8KBnYQ+kJfCl/PCYa89pStjx43sJW/bMJNa1m7AekJCwRsNu3owBXpQI2cn9ILmIjed2x1r0/5/E6GeWrP9L3wrUrGXTWRUG2z7bGfj/DP0Y5vR2skcTZoPC2Xw3nL0DzLqDMBvH53Wd70qlP+HNVGzru7ZL5NvOP+AZ/oBSlD6r3NzWatP05XbVuW4EROiBB4J64ab+SBXFsLHBMU3O4BD9C8XVACEpSlmv96MNsIAzRW4seGTBx9kb66deaNRk+cjdfFld84Y X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Peter Xu no_page_table() is not yet used for hugetlb code paths. Make it prepared. The major difference here is hugetlb will return -EFAULT as long as page cache does not exist, even if VM_SHARED. See hugetlb_follow_page_mask(). Pass "address" into no_page_table() too, as hugetlb will need it. Reviewed-by: Christoph Hellwig Signed-off-by: Peter Xu --- mm/gup.c | 44 ++++++++++++++++++++++++++------------------ 1 file changed, 26 insertions(+), 18 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index 82d28d517d0d..6c0d82fa8cc7 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags) #ifdef CONFIG_MMU static struct page *no_page_table(struct vm_area_struct *vma, - unsigned int flags) + unsigned int flags, unsigned long address) { + if (!(flags & FOLL_DUMP)) + return NULL; + /* - * When core dumping an enormous anonymous area that nobody - * has touched so far, we don't want to allocate unnecessary pages or + * When core dumping, we don't want to allocate unnecessary pages or * page tables. Return error instead of NULL to skip handle_mm_fault, * then get_dump_page() will return NULL to leave a hole in the dump. * But we can only make this optimization where a hole would surely * be zero-filled if handle_mm_fault() actually did handle it. 
*/ - if ((flags & FOLL_DUMP) && - (vma_is_anonymous(vma) || !vma->vm_ops->fault)) + if (is_vm_hugetlb_page(vma)) { + struct hstate *h = hstate_vma(vma); + + if (!hugetlbfs_pagecache_present(h, vma, address)) + return ERR_PTR(-EFAULT); + } else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) { return ERR_PTR(-EFAULT); + } + return NULL; } @@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, ptep = pte_offset_map_lock(mm, pmd, address, &ptl); if (!ptep) - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); pte = ptep_get(ptep); if (!pte_present(pte)) goto no_page; @@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma, pte_unmap_unlock(ptep, ptl); if (!pte_none(pte)) return NULL; - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); } static struct page *follow_pmd_mask(struct vm_area_struct *vma, @@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma, pmd = pmd_offset(pudp, address); pmdval = pmdp_get_lockless(pmd); if (pmd_none(pmdval)) - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); if (!pmd_present(pmdval)) - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); if (pmd_devmap(pmdval)) { ptl = pmd_lock(mm, pmd); page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap); spin_unlock(ptl); if (page) return page; - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); } if (likely(!pmd_trans_huge(pmdval))) return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap); if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags)) - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); ptl = pmd_lock(mm, pmd); if (unlikely(!pmd_present(*pmd))) { spin_unlock(ptl); - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); } if (unlikely(!pmd_trans_huge(*pmd))) { spin_unlock(ptl); @@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma, pud = pud_offset(p4dp, address); if (pud_none(*pud)) - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); if (pud_devmap(*pud)) { ptl = pud_lock(mm, pud); page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap); spin_unlock(ptl); if (page) return page; - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); } if (unlikely(pud_bad(*pud))) - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); return follow_pmd_mask(vma, address, pud, flags, ctx); } @@ -776,10 +784,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma, p4d = p4d_offset(pgdp, address); if (p4d_none(*p4d)) - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); BUILD_BUG_ON(p4d_huge(*p4d)); if (unlikely(p4d_bad(*p4d))) - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); return follow_pud_mask(vma, address, p4d, flags, ctx); } @@ -829,7 +837,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma, pgd = pgd_offset(mm, address); if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd))) - return no_page_table(vma, flags); + return no_page_table(vma, flags, address); return follow_p4d_mask(vma, address, pgd, flags, ctx); } From patchwork Tue Dec 19 07:55:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 13497945 Return-Path: 
Subject: [PATCH 09/13] mm/gup: Cache *pudp in follow_pud_mask() Date: Tue, 19 Dec 2023 15:55:34 +0800 Message-ID: <20231219075538.414708-10-peterx@redhat.com> In-Reply-To: <20231219075538.414708-1-peterx@redhat.com> References: <20231219075538.414708-1-peterx@redhat.com> MIME-Version: 1.0 From: Peter Xu Introduce "pud_t pud" in the function, so the code won't dereference *pudp multiple times. Not only does the repeated dereference read less straightforwardly, but if the dereferences really happen it is also unclear whether they could race with a concurrent modification and observe different *pudp values.
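The motivation is the usual snapshot pattern for page table walkers: read the entry once into a local variable and base every later test on that copy. A minimal userspace sketch of the idea follows; the helper names (entry_none(), entry_leaf()) are invented for illustration and are not the kernel API.

#include <stdio.h>

typedef unsigned long pud_t;                    /* stand-in, not the kernel type */

static int entry_none(pud_t v) { return v == 0; }
static int entry_leaf(pud_t v) { return v & 0x1; }  /* pretend bit 0 means "huge leaf" */

/*
 * Read the entry exactly once, then decide everything from the snapshot.
 * Re-reading *pudp in each test could observe two different values if a
 * concurrent writer changed the entry in between.
 */
static const char *classify(volatile pud_t *pudp)
{
        pud_t pud = *pudp;                      /* single snapshot */

        if (entry_none(pud))
                return "none";
        if (entry_leaf(pud))
                return "huge leaf";
        return "page table";
}

int main(void)
{
        pud_t entry = 0x1;
        printf("%s\n", classify(&entry));       /* prints "huge leaf" */
        return 0;
}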
Signed-off-by: Peter Xu Acked-by: James Houghton --- mm/gup.c | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index 6c0d82fa8cc7..97e87b7a15c3 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma, unsigned int flags, struct follow_page_context *ctx) { - pud_t *pud; + pud_t *pudp, pud; spinlock_t *ptl; struct page *page; struct mm_struct *mm = vma->vm_mm; - pud = pud_offset(p4dp, address); - if (pud_none(*pud)) + pudp = pud_offset(p4dp, address); + pud = *pudp; + if (pud_none(pud)) return no_page_table(vma, flags, address); - if (pud_devmap(*pud)) { - ptl = pud_lock(mm, pud); - page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap); + if (pud_devmap(pud)) { + ptl = pud_lock(mm, pudp); + page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap); spin_unlock(ptl); if (page) return page; return no_page_table(vma, flags, address); } - if (unlikely(pud_bad(*pud))) + if (unlikely(pud_bad(pud))) return no_page_table(vma, flags, address); - return follow_pmd_mask(vma, address, pud, flags, ctx); + return follow_pmd_mask(vma, address, pudp, flags, ctx); } static struct page *follow_p4d_mask(struct vm_area_struct *vma, From patchwork Tue Dec 19 07:55:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 13497946 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0FF45C46CD4 for ; Tue, 19 Dec 2023 07:58:01 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 875E38D000B; Tue, 19 Dec 2023 02:58:01 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 7FECB8D0005; Tue, 19 Dec 2023 02:58:01 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 6516B8D000B; Tue, 19 Dec 2023 02:58:01 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0016.hostedemail.com [216.40.44.16]) by kanga.kvack.org (Postfix) with ESMTP id 4D99D8D0005 for ; Tue, 19 Dec 2023 02:58:01 -0500 (EST) Received: from smtpin30.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay02.hostedemail.com (Postfix) with ESMTP id 1DB7E120340 for ; Tue, 19 Dec 2023 07:58:01 +0000 (UTC) X-FDA: 81582814362.30.EC31FD6 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf28.hostedemail.com (Postfix) with ESMTP id 66800C0016 for ; Tue, 19 Dec 2023 07:57:59 +0000 (UTC) Authentication-Results: imf28.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=fQknvCbV; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf28.hostedemail.com: domain of peterx@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=peterx@redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1702972679; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=5ijea/674RZI9IDvQT14lUfypWJKjpWiawe+yIIZEUw=; b=mDgY18l52YdIqxZqOhRQESRh0iXaOuuIWRrkUYrJ83VfNLVvODCSDBHuU2bkmtnzdurWTm 
V" , Andrew Morton , Jason Gunthorpe , Andrea Arcangeli , peterx@redhat.com, Axel Rasmussen Subject: [PATCH 10/13] mm/gup: Handle huge pud for follow_pud_mask() Date: Tue, 19 Dec 2023 15:55:35 +0800 Message-ID: <20231219075538.414708-11-peterx@redhat.com> In-Reply-To: <20231219075538.414708-1-peterx@redhat.com> References: <20231219075538.414708-1-peterx@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.4 X-Rspamd-Queue-Id: 66800C0016 X-Rspam-User: X-Rspamd-Server: rspam02 X-Stat-Signature: 7eo1yugfztho9rnoqciwy954qqarnhx8 X-HE-Tag: 1702972679-27275 X-HE-Meta: U2FsdGVkX19PjC4L8KsI3HOJZv+Fzfw+DcgYtOkr6dHXVeHlAFcmyNO2LUuDAF/2FBoZjGHSs65RAOHgQhRaxrPY72iHLupuWdpXLuJekJQnFykSNESGRQihBhgzpec2O90SdTBKu+QaBUh0jc1wJ6tsgQopsBhHK1bg9PAK5TJ0fwwqcAEbjY3Ua29OcH/VsfbOMpO3s/RsFuLH/TkAs5zN/Qr8J+KyI5mZmXA0wzcpE3Ga9WMZRlvdLvgCBlp+yQ2gk0O2D/gULMRKJgoWjzPEPnXwXKSP5EhzQus5SBG7KOyxqxpClEcJyw4kgtbfvcp7VWBRU5Kc6xbHdVmqGhTiSgeW0tZRMadDhMFFQi1A+GyrvPJlrfmknGkt4prfteJDSgFMiqahtHFvHRHwRITKNidkLufoU7ZrF6Y2eF2O88N012g+qMUirX8eXCrTlcG3mLL6MLPM8bzNj8/2i+LJUGIr+mnXIwjIqKz385Qn+Es6Hb2mGDo/zxuGbNHzrxzR5bWdK9UQKoMyMKYbdFokItnZdx0Du1zIm3zkH0BPtHNKuwBC8KxudTM27BFQUn8CPcjreNA5qgiI2WUlOhUgIoD0IIN2jUbWEBSHhAgSdW8/3po9K/CpGGABcTR+6dYaA7ZyoCrCElkE1ZDGr9YLZI/7TFUDDvw3Zo3/MZiYiNul/rmsvWn+oO6GBozpizpdqeqR5oKEiNcUzZV7nvn6lMUhab1NurYnoI6Vk5NmiyKwwzuCVTKacFyoGKbb0oR66EQUevJHMAC74gr17ENWceB93vyKpqwOOz2zvOYRIwzXFn7fOgHJvD0F6f27tRa1GEbxooryhwGRvcT+7xwcDNZrEPvw+ErsmsuTjxOJ14LPf9H8WGS7jZTNrSCIU/2aXPiCaanCMEjJAixqICpRQzXLfgEf8BdQ79hAfcraMfhbs+VcyFiLPktJz3rFUk5oi6i+O0c8OGSpTea gz6eOJua keNzEMQb0Rp18CoBBNWWdFxFIcVO2YVveRdMVr6kbjx6lKJe0pWm8+Rs/Slgr7EulMDUQ9koe/cTLBA/9c6xu6gkD9s1yCvKhbnIKxQDWSiUEiTz09cEx3GmBG0vEJ+5XgrGAEDXWQKlFFg0D5kkvQGr/6Sd0bgm1odoFft1sQffu4gukwpfQj3zeR1INw42ZCHvFV78iMkaKqMvGvhYi6AzPWQWTHJf1VUa0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Peter Xu Teach follow_pud_mask() to be able to handle normal PUD pages like hugetlb. Rename follow_devmap_pud() to follow_huge_pud() so that it can process either huge devmap or hugetlb. Move it out of TRANSPARENT_HUGEPAGE_PUD and and huge_memory.c (which relies on CONFIG_THP). In the new follow_huge_pud(), taking care of possible CoR for hugetlb if necessary. touch_pud() needs to be moved out of huge_memory.c to be accessable from gup.c even if !THP. Since at it, optimize the non-present check by adding a pud_present() early check before taking the pgtable lock, failing the follow_page() early if PUD is not present: that is required by both devmap or hugetlb. Use pud_huge() to also cover the pud_devmap() case. One more trivial thing to mention is, introduce "pud_t pud" in the code paths along the way, so the code doesn't dereference *pudp multiple time. Not only because that looks less straightforward, but also because if the dereference really happened, it's not clear whether there can be race to see different *pudp values when it's being modified at the same time. Setting ctx->page_mask properly for a PUD entry. As a side effect, this patch should also be able to optimize devmap GUP on PUD to be able to jump over the whole PUD range, but not yet verified. Hugetlb already can do so prior to this patch. 
Signed-off-by: Peter Xu --- include/linux/huge_mm.h | 8 ----- mm/gup.c | 70 +++++++++++++++++++++++++++++++++++++++-- mm/huge_memory.c | 47 ++------------------------- mm/internal.h | 2 ++ 4 files changed, 71 insertions(+), 56 deletions(-) diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h index d335130e145f..80f181d76f94 100644 --- a/include/linux/huge_mm.h +++ b/include/linux/huge_mm.h @@ -346,8 +346,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio) struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd, int flags, struct dev_pagemap **pgmap); -struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr, - pud_t *pud, int flags, struct dev_pagemap **pgmap); vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf); @@ -503,12 +501,6 @@ static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma, return NULL; } -static inline struct page *follow_devmap_pud(struct vm_area_struct *vma, - unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap) -{ - return NULL; -} - static inline bool thp_migration_supported(void) { return false; diff --git a/mm/gup.c b/mm/gup.c index 97e87b7a15c3..5b14f91d2f6b 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma, return NULL; } +#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES +static struct page *follow_huge_pud(struct vm_area_struct *vma, + unsigned long addr, pud_t *pudp, + int flags, struct follow_page_context *ctx) +{ + struct mm_struct *mm = vma->vm_mm; + struct page *page; + pud_t pud = *pudp; + unsigned long pfn = pud_pfn(pud); + int ret; + + assert_spin_locked(pud_lockptr(mm, pudp)); + + if ((flags & FOLL_WRITE) && !pud_write(pud)) + return NULL; + + if (!pud_present(pud)) + return NULL; + + pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT; + +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD + if (pud_devmap(pud)) { + /* + * device mapped pages can only be returned if the caller + * will manage the page reference count. 
+ * + * At least one of FOLL_GET | FOLL_PIN must be set, so + * assert that here: + */ + if (!(flags & (FOLL_GET | FOLL_PIN))) + return ERR_PTR(-EEXIST); + + if (flags & FOLL_TOUCH) + touch_pud(vma, addr, pudp, flags & FOLL_WRITE); + + ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap); + if (!ctx->pgmap) + return ERR_PTR(-EFAULT); + } +#endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ + page = pfn_to_page(pfn); + + if (!pud_devmap(pud) && !pud_write(pud) && + gup_must_unshare(vma, flags, page)) + return ERR_PTR(-EMLINK); + + ret = try_grab_page(page, flags); + if (ret) + page = ERR_PTR(ret); + else + ctx->page_mask = HPAGE_PUD_NR - 1; + + return page; +} +#else /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */ +static struct page *follow_huge_pud(struct vm_area_struct *vma, + unsigned long addr, pud_t *pudp, + int flags, struct follow_page_context *ctx) +{ + return NULL; +} +#endif /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */ + static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, pte_t *pte, unsigned int flags) { @@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma, pudp = pud_offset(p4dp, address); pud = *pudp; - if (pud_none(pud)) + if (pud_none(pud) || !pud_present(pud)) return no_page_table(vma, flags, address); - if (pud_devmap(pud)) { + if (pud_huge(pud)) { ptl = pud_lock(mm, pudp); - page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap); + page = follow_huge_pud(vma, address, pudp, flags, ctx); spin_unlock(ptl); if (page) return page; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 6be1a380a298..def1dbe0d7e8 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1371,8 +1371,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm, } #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD -static void touch_pud(struct vm_area_struct *vma, unsigned long addr, - pud_t *pud, bool write) +void touch_pud(struct vm_area_struct *vma, unsigned long addr, + pud_t *pud, bool write) { pud_t _pud; @@ -1384,49 +1384,6 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr, update_mmu_cache_pud(vma, addr, pud); } -struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr, - pud_t *pud, int flags, struct dev_pagemap **pgmap) -{ - unsigned long pfn = pud_pfn(*pud); - struct mm_struct *mm = vma->vm_mm; - struct page *page; - int ret; - - assert_spin_locked(pud_lockptr(mm, pud)); - - if (flags & FOLL_WRITE && !pud_write(*pud)) - return NULL; - - if (pud_present(*pud) && pud_devmap(*pud)) - /* pass */; - else - return NULL; - - if (flags & FOLL_TOUCH) - touch_pud(vma, addr, pud, flags & FOLL_WRITE); - - /* - * device mapped pages can only be returned if the - * caller will manage the page reference count. 
- * - * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here: - */ - if (!(flags & (FOLL_GET | FOLL_PIN))) - return ERR_PTR(-EEXIST); - - pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT; - *pgmap = get_dev_pagemap(pfn, *pgmap); - if (!*pgmap) - return ERR_PTR(-EFAULT); - page = pfn_to_page(pfn); - - ret = try_grab_page(page, flags); - if (ret) - page = ERR_PTR(ret); - - return page; -} - int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm, pud_t *dst_pud, pud_t *src_pud, unsigned long addr, struct vm_area_struct *vma) diff --git a/mm/internal.h b/mm/internal.h index 222e63b2dea4..2fca14553d0f 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1007,6 +1007,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags); /* * mm/huge_memory.c */ +void touch_pud(struct vm_area_struct *vma, unsigned long addr, + pud_t *pud, bool write); struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd, unsigned int flags); From patchwork Tue Dec 19 07:55:36 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 13497979 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 625FEC46CA2 for ; Tue, 19 Dec 2023 07:58:14 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id F2FBE8D000C; Tue, 19 Dec 2023 02:58:13 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id EB5F38D0005; Tue, 19 Dec 2023 02:58:13 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id D08828D000C; Tue, 19 Dec 2023 02:58:13 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0013.hostedemail.com [216.40.44.13]) by kanga.kvack.org (Postfix) with ESMTP id BC2F48D0005 for ; Tue, 19 Dec 2023 02:58:13 -0500 (EST) Received: from smtpin08.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay05.hostedemail.com (Postfix) with ESMTP id 802EC408CF for ; Tue, 19 Dec 2023 07:58:13 +0000 (UTC) X-FDA: 81582814866.08.A398BD7 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by imf16.hostedemail.com (Postfix) with ESMTP id C7D82180011 for ; Tue, 19 Dec 2023 07:58:11 +0000 (UTC) Authentication-Results: imf16.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=hOmhLtLR; dmarc=pass (policy=none) header.from=redhat.com; spf=pass (imf16.hostedemail.com: domain of peterx@redhat.com designates 170.10.133.124 as permitted sender) smtp.mailfrom=peterx@redhat.com ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=hostedemail.com; s=arc-20220608; t=1702972691; h=from:from:sender:reply-to:subject:subject:date:date: message-id:message-id:to:to:cc:cc:mime-version:mime-version: content-type:content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references:dkim-signature; bh=z+0dGl/nMUgxPM498YSXtipz9Gb3hgqO7wNjJOzkT+E=; b=rXjsXmVG8AFVhyaMFIltF7/p3GwiBXVvwDKrWJ+5nqVc5dbJdeojJiH0YkeqfAmWUdVeJw jQZAVkoza9racUsvVRwhmwyKPVtREG4SEJY4pqdw9idWPsz5/WWKv20+idnOGyoMjbD4hk qWFUF/Lw9ajEGF2HVCJ405/+1h3DyvY= ARC-Authentication-Results: i=1; imf16.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=hOmhLtLR; dmarc=pass (policy=none) header.from=redhat.com; spf=pass 
V" , Andrew Morton , Jason Gunthorpe , Andrea Arcangeli , peterx@redhat.com, Axel Rasmussen Subject: [PATCH 11/13] mm/gup: Handle huge pmd for follow_pmd_mask() Date: Tue, 19 Dec 2023 15:55:36 +0800 Message-ID: <20231219075538.414708-12-peterx@redhat.com> In-Reply-To: <20231219075538.414708-1-peterx@redhat.com> References: <20231219075538.414708-1-peterx@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.4 X-Rspamd-Server: rspam09 X-Rspamd-Queue-Id: C7D82180011 X-Stat-Signature: mxpk4twpesi7fkndcncjcjn5z3ipewfb X-Rspam-User: X-HE-Tag: 1702972691-273461 X-HE-Meta: U2FsdGVkX19pXRGTfI8Lf2sthhktDVCDfDCV4qXpIiIrjs3f80L1er3NS4Y8ciwjZr/09/uR9D7Yj+J490v9hsNTbuug9V9rnEc3RYjCEjIWmEk0ZBi0qDl4gXbBfa4BfAxvBcvTqwWVGt9tZJZ+kykhROdeID7hF/gsYQsFD2T9ISL6fYtpzEPIhcaDIICWT4twTs2WQaeLfvZYjvf4wd4dlgc4l6uyzeq4N32JnMCUz4i3jlKRiS6UJ9CGiG71EG4IsGtJhYn0C2yu52ixMKeH5uA+WxHujWG9EPveEZv9oNU3fiuUvKXj08yfQgtarpXRXIiOdxqKUaz4dBIO6LXZYXiC0DhHOZXFfYK0ScwBlewd/WqD896Sng5wtuUwCNLjR0DOVlRZv/o/r1yaTKpCjhLNwYg3o23Z1O4VGb/wpH2rw47ZidrvVXYG7lOYOOet6hBeDOZGKqZMG7GCn4neusdAgwnonO770hUEGmhQq0TBADMe6Wpd5sdNqcNwzD+7KWAn3WDvj63MvLimSVXd/SPGB3eTMox7cECYlBG+7dmMH0KcmBmJGFTxLYZZMVHjXrzdIPOXZ9M51I32Gsv6RhM/QT95JeHa1eyPIBHil61WEaM3PeNR7vckM41aaJavYFX2iyKiXTQDmI6f3LEoHU4LAIyWuFr61hk29thNW4Kf8jIb6mq40ByKiZLZTqelcEecboUITfZoAz/eChJjng6jqtRqePv0SZW1wkeGc57oYa8eVCk/zzj7X45KKY4weg8BvRM+dLRutclAmOZYgVZ/+IQPEksR4j4B2QQ5xUlg0eM2T52zZSgEq94r9F8JR5FkgrGJW5Eief4og1grMHQeZ1SqQMKPlUpEBRrQlW1ZoxM7M3Z6fOmEzDzXPTnYWHahgU7eyDwwU+/OeC/4nood3GnpsEF1uHVCezPRDvCGx0n7RVRF0BscUH4kFOeyoivYFlsSwMdThWu ioZWytXJ KoewUA+uUVo3YtR9RiKMhtQwnucBKBEUA8jS0t/Z07vq/Q2ubgX9AUasH+J/5SKWm3Y0ZmbykhSOLs7dVvlFq4shE0OIvYAx/oE27DP28Xp7Subf6sAtFIU1RUBaWO1sHLesL4H6thw8l4ZlANYLrZPtiMpY7Sw6xSbXwERj9J8a7sFSZS5w6bS6OBc+0ErIkjFUZfkmdda/I3BnWa8DEEBoTThuvLYC/JC1b X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Peter Xu Replace pmd_trans_huge() with pmd_thp_or_huge() to also cover pmd_huge() as long as enabled. FOLL_TOUCH and FOLL_SPLIT_PMD only apply to THP, not yet huge. Since now follow_trans_huge_pmd() can process hugetlb pages, renaming it into follow_huge_pmd() to match what it does. Move it into gup.c so not depend on CONFIG_THP. When at it, move the ctx->page_mask setup into follow_huge_pmd(), only set it when the page is valid. It was not a bug to set it before even if GUP failed (page==NULL), because follow_page_mask() callers always ignores page_mask if so. But doing so makes the code cleaner. Signed-off-by: Peter Xu --- mm/gup.c | 107 ++++++++++++++++++++++++++++++++++++++++++++--- mm/huge_memory.c | 86 +------------------------------------ mm/internal.h | 5 +-- 3 files changed, 105 insertions(+), 93 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index 5b14f91d2f6b..080dff79b650 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -580,6 +580,93 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma, return page; } + +/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */ +static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page, + struct vm_area_struct *vma, + unsigned int flags) +{ + /* If the pmd is writable, we can write to the page. */ + if (pmd_write(pmd)) + return true; + + /* Maybe FOLL_FORCE is set to override it? 
*/ + if (!(flags & FOLL_FORCE)) + return false; + + /* But FOLL_FORCE has no effect on shared mappings */ + if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED)) + return false; + + /* ... or read-only private ones */ + if (!(vma->vm_flags & VM_MAYWRITE)) + return false; + + /* ... or already writable ones that just need to take a write fault */ + if (vma->vm_flags & VM_WRITE) + return false; + + /* + * See can_change_pte_writable(): we broke COW and could map the page + * writable if we have an exclusive anonymous page ... + */ + if (!page || !PageAnon(page) || !PageAnonExclusive(page)) + return false; + + /* ... and a write-fault isn't required for other reasons. */ + if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd)) + return false; + return !userfaultfd_huge_pmd_wp(vma, pmd); +} + +static struct page *follow_huge_pmd(struct vm_area_struct *vma, + unsigned long addr, pmd_t *pmd, + unsigned int flags, + struct follow_page_context *ctx) +{ + struct mm_struct *mm = vma->vm_mm; + pmd_t pmdval = *pmd; + struct page *page; + int ret; + + assert_spin_locked(pmd_lockptr(mm, pmd)); + + page = pmd_page(pmdval); + VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page); + + if ((flags & FOLL_WRITE) && + !can_follow_write_pmd(pmdval, page, vma, flags)) + return NULL; + + /* Avoid dumping huge zero page */ + if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval)) + return ERR_PTR(-EFAULT); + + if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags)) + return NULL; + + if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page)) + return ERR_PTR(-EMLINK); + + VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) && + !PageAnonExclusive(page), page); + + ret = try_grab_page(page, flags); + if (ret) + return ERR_PTR(ret); + +#ifdef CONFIG_TRANSPARENT_HUGEPAGE + if (pmd_trans_huge(pmdval) && (flags & FOLL_TOUCH)) + touch_pmd(vma, addr, pmd, flags & FOLL_WRITE); +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */ + + page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT; + ctx->page_mask = HPAGE_PMD_NR - 1; + VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page); + + return page; +} + #else /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */ static struct page *follow_huge_pud(struct vm_area_struct *vma, unsigned long addr, pud_t *pudp, @@ -587,6 +674,14 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma, { return NULL; } + +static struct page *follow_huge_pmd(struct vm_area_struct *vma, + unsigned long addr, pmd_t *pmd, + unsigned int flags, + struct follow_page_context *ctx) +{ + return NULL; +} #endif /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address, @@ -784,31 +879,31 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma, return page; return no_page_table(vma, flags, address); } - if (likely(!pmd_trans_huge(pmdval))) + if (likely(!pmd_thp_or_huge(pmdval))) return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap); if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags)) return no_page_table(vma, flags, address); ptl = pmd_lock(mm, pmd); - if (unlikely(!pmd_present(*pmd))) { + pmdval = *pmd; + if (unlikely(!pmd_present(pmdval))) { spin_unlock(ptl); return no_page_table(vma, flags, address); } - if (unlikely(!pmd_trans_huge(*pmd))) { + if (unlikely(!pmd_thp_or_huge(pmdval))) { spin_unlock(ptl); return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap); } - if (flags & FOLL_SPLIT_PMD) { + if (pmd_trans_huge(pmdval) && (flags & FOLL_SPLIT_PMD)) { spin_unlock(ptl); split_huge_pmd(vma, pmd, address); 
/* If pmd was left empty, stuff a page table in there quickly */ return pte_alloc(mm, pmd) ? ERR_PTR(-ENOMEM) : follow_page_pte(vma, address, pmd, flags, &ctx->pgmap); } - page = follow_trans_huge_pmd(vma, address, pmd, flags); + page = follow_huge_pmd(vma, address, pmd, flags, ctx); spin_unlock(ptl); - ctx->page_mask = HPAGE_PMD_NR - 1; return page; } diff --git a/mm/huge_memory.c b/mm/huge_memory.c index def1dbe0d7e8..930c59d7ceab 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1216,8 +1216,8 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write) EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud); #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */ -static void touch_pmd(struct vm_area_struct *vma, unsigned long addr, - pmd_t *pmd, bool write) +void touch_pmd(struct vm_area_struct *vma, unsigned long addr, + pmd_t *pmd, bool write) { pmd_t _pmd; @@ -1570,88 +1570,6 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma, return pmd_dirty(pmd); } -/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */ -static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page, - struct vm_area_struct *vma, - unsigned int flags) -{ - /* If the pmd is writable, we can write to the page. */ - if (pmd_write(pmd)) - return true; - - /* Maybe FOLL_FORCE is set to override it? */ - if (!(flags & FOLL_FORCE)) - return false; - - /* But FOLL_FORCE has no effect on shared mappings */ - if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED)) - return false; - - /* ... or read-only private ones */ - if (!(vma->vm_flags & VM_MAYWRITE)) - return false; - - /* ... or already writable ones that just need to take a write fault */ - if (vma->vm_flags & VM_WRITE) - return false; - - /* - * See can_change_pte_writable(): we broke COW and could map the page - * writable if we have an exclusive anonymous page ... - */ - if (!page || !PageAnon(page) || !PageAnonExclusive(page)) - return false; - - /* ... and a write-fault isn't required for other reasons. 
*/ - if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd)) - return false; - return !userfaultfd_huge_pmd_wp(vma, pmd); -} - -struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, - unsigned long addr, - pmd_t *pmd, - unsigned int flags) -{ - struct mm_struct *mm = vma->vm_mm; - struct page *page; - int ret; - - assert_spin_locked(pmd_lockptr(mm, pmd)); - - page = pmd_page(*pmd); - VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page); - - if ((flags & FOLL_WRITE) && - !can_follow_write_pmd(*pmd, page, vma, flags)) - return NULL; - - /* Avoid dumping huge zero page */ - if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd)) - return ERR_PTR(-EFAULT); - - if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags)) - return NULL; - - if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page)) - return ERR_PTR(-EMLINK); - - VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) && - !PageAnonExclusive(page), page); - - ret = try_grab_page(page, flags); - if (ret) - return ERR_PTR(ret); - - if (flags & FOLL_TOUCH) - touch_pmd(vma, addr, pmd, flags & FOLL_WRITE); - - page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT; - VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page); - - return page; -} - /* NUMA hinting page fault entry point for trans huge pmds */ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf) { diff --git a/mm/internal.h b/mm/internal.h index 2fca14553d0f..c0e953a1eb62 100644 --- a/mm/internal.h +++ b/mm/internal.h @@ -1009,9 +1009,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags); */ void touch_pud(struct vm_area_struct *vma, unsigned long addr, pud_t *pud, bool write); -struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, - unsigned long addr, pmd_t *pmd, - unsigned int flags); +void touch_pmd(struct vm_area_struct *vma, unsigned long addr, + pmd_t *pmd, bool write); /* * mm/mmap.c From patchwork Tue Dec 19 07:55:37 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Peter Xu X-Patchwork-Id: 13497980 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from kanga.kvack.org (kanga.kvack.org [205.233.56.17]) by smtp.lore.kernel.org (Postfix) with ESMTP id 97427C41535 for ; Tue, 19 Dec 2023 07:58:26 +0000 (UTC) Received: by kanga.kvack.org (Postfix) id 345538D000D; Tue, 19 Dec 2023 02:58:26 -0500 (EST) Received: by kanga.kvack.org (Postfix, from userid 40) id 2CEC38D0005; Tue, 19 Dec 2023 02:58:26 -0500 (EST) X-Delivered-To: int-list-linux-mm@kvack.org Received: by kanga.kvack.org (Postfix, from userid 63042) id 148998D000D; Tue, 19 Dec 2023 02:58:26 -0500 (EST) X-Delivered-To: linux-mm@kvack.org Received: from relay.hostedemail.com (smtprelay0011.hostedemail.com [216.40.44.11]) by kanga.kvack.org (Postfix) with ESMTP id F235A8D0005 for ; Tue, 19 Dec 2023 02:58:25 -0500 (EST) Received: from smtpin26.hostedemail.com (a10.router.float.18 [10.200.18.1]) by unirelay04.hostedemail.com (Postfix) with ESMTP id CF7991A0439 for ; Tue, 19 Dec 2023 07:58:25 +0000 (UTC) X-FDA: 81582815370.26.8B74F15 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by imf15.hostedemail.com (Postfix) with ESMTP id 2AEC9A0019 for ; Tue, 19 Dec 2023 07:58:23 +0000 (UTC) Authentication-Results: imf15.hostedemail.com; dkim=pass header.d=redhat.com header.s=mimecast20190719 header.b=bLZnMMvB; dmarc=pass (policy=none) header.from=redhat.com; spf=pass 
V" , Andrew Morton , Jason Gunthorpe , Andrea Arcangeli , peterx@redhat.com, Axel Rasmussen Subject: [PATCH 12/13] mm/gup: Handle hugepd for follow_page() Date: Tue, 19 Dec 2023 15:55:37 +0800 Message-ID: <20231219075538.414708-13-peterx@redhat.com> In-Reply-To: <20231219075538.414708-1-peterx@redhat.com> References: <20231219075538.414708-1-peterx@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.4 X-Rspamd-Queue-Id: 2AEC9A0019 X-Rspam-User: X-Rspamd-Server: rspam02 X-Stat-Signature: uu9feyn1w9izqdpitmpti3dqimew6ctc X-HE-Tag: 1702972703-386255 X-HE-Meta: U2FsdGVkX1+cniLjce3X0JDrAnW5nfHx5rFHZ+4CNDPu3IXCz3E6Gsf9Mr83VUdvBTM53WxHEGV0Umy8jeU/Mb6pITtfnCBBVMPPV+i0OIxUcOuIvuS9fBbSZ4ub7gCjJoWaXnri4iKxkixf8HbMy7/GFx8zcAsF5/flP8XlbNRuxY2qlKEYCE73j/V7IfIm7vodtpX+/wcisOLw4RBVX/1yR7v/z1qbaf+PHbhyRwpMp3mi/yLxEtrnu/Cft3gn1qlUIYkT2WWXO9igXvwmAUyJ2O7itNNZgHoVB3nuSZPT9rErOB213yL1lcImtJ97sDn7jEQyS5JZN/Zcgqg+wSfXIA9Mk3uPklMKrdDOcSTLqeUnsm3sw/gTZrBbEZNWk0Rvwqc8jpJZhz9FagC42M+JxZn0Qlr3+bQzxRWfZZnG9f6HOlWPOYXn2MbgQpBl9pZ6qMDZHPxHWxFgpNcgucTGy+mZMSzBRKtN0CuRlgqzVzZItNYvMZr3UQrBueiUjqRU00Zto90fUM9efTQ0pVSSM7hgUC8QeP897e2uhL/M3yn8YLRXKqcrNlWFu3B96osFFUIYj6trAwuNxc1DpOhD587UqnTYha81G5ndPOQA9vSiSl0dWCHUWMGFH059n1kLy0gkvI8SL7fyd1HqlhtSgG8IFQO4sU4WGr5OaZDg6rJPGas0IjT2FMIPa+0uUvh7zJ6KFdAW9VZSj+IIYvoCvJVQ8OUcRSkSyUT7KsckIEqWlLyk/YKNg/zcag1STA1+Df9YCCIJYeOzhEjwnrPCBqNxW9zwiAef898xVAkOEs47/cw23Swu0MUOwSMnmXt7S/xsQVa0sYiysPpzEh1M3gn0zXe8DeV35IEWoqSXa/cw/5yQ5wa2xcwd3fEkuQ4YPXay2l48yS+sHY2WhgR2f4SVrFV0OTVer41aBe8jLtp6zjL309i0RV3peTU1ygB2ZWowEFN89AcvyVa WJWlY9zN J0Hb4V5mFF20zKuEdxzwYLiY3ZQIkjtxQdrjzie2gmzSDyxVJens3Xlh1JRpaUNiHypWFjsarcIBdL9mpP3hU/rqshAIJOukRdVCAM65pyPoCPYr6WLB42G3BNbrJp+awuyzpQtxazu2NFBmusvRCBBQU1dGYciaNQ7Lm87iojsgghOJmDJODpc2W87llWUmBNZAHMOqZrMaYReYJDmQ4McZy4S+9eMtastpc X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Peter Xu Hugepd is only used in PowerPC so far on 4K page size kernels where hash mmu is used. follow_page_mask() used to leverage hugetlb APIs to access hugepd entries. Teach follow_page_mask() itself on hugepd. With previous refactors on fast-gup gup_huge_pd(), most of the code can be easily leveraged. There's something not needed for follow page, for example, gup_hugepte() tries to detect pgtable entry change which will never happen with slow gup (which has the pgtable lock held), but that's not a problem to check. Since follow_page() always only fetch one page, set the end to "address + PAGE_SIZE" should suffice. We will still do the pgtable walk once for each hugetlb page by setting ctx->page_mask properly. One thing worth mentioning is that some level of pgtable's _bad() helper will report is_hugepd() entries as TRUE on Power8 hash MMUs. I think it at least applies to PUD on Power8 with 4K pgsize. It means feeding a hugepd entry to pud_bad() will report a false positive. Let's leave that for now because it can be arch-specific where I am a bit declined to touch. In this patch it's not a problem as long as hugepd is detected before any bad pgtable entries. 
Signed-off-by: Peter Xu --- mm/gup.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++++------- 1 file changed, 69 insertions(+), 9 deletions(-) diff --git a/mm/gup.c b/mm/gup.c index 080dff79b650..14a7d13e7bd6 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -30,6 +30,11 @@ struct follow_page_context { unsigned int page_mask; }; +static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd, + unsigned long addr, unsigned int pdshift, + unsigned int flags, + struct follow_page_context *ctx); + static inline void sanity_check_pinned_pages(struct page **pages, unsigned long npages) { @@ -871,6 +876,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma, return no_page_table(vma, flags, address); if (!pmd_present(pmdval)) return no_page_table(vma, flags, address); + if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval))))) + return follow_hugepd(vma, __hugepd(pmd_val(pmdval)), + address, PMD_SHIFT, flags, ctx); if (pmd_devmap(pmdval)) { ptl = pmd_lock(mm, pmd); page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap); @@ -921,6 +929,9 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma, pud = *pudp; if (pud_none(pud) || !pud_present(pud)) return no_page_table(vma, flags, address); + if (unlikely(is_hugepd(__hugepd(pud_val(pud))))) + return follow_hugepd(vma, __hugepd(pud_val(pud)), + address, PUD_SHIFT, flags, ctx); if (pud_huge(pud)) { ptl = pud_lock(mm, pudp); page = follow_huge_pud(vma, address, pudp, flags, ctx); @@ -940,13 +951,17 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma, unsigned int flags, struct follow_page_context *ctx) { - p4d_t *p4d; + p4d_t *p4d, p4dval; p4d = p4d_offset(pgdp, address); - if (p4d_none(*p4d)) - return no_page_table(vma, flags, address); - BUILD_BUG_ON(p4d_huge(*p4d)); - if (unlikely(p4d_bad(*p4d))) + p4dval = *p4d; + BUILD_BUG_ON(p4d_huge(p4dval)); + + if (unlikely(is_hugepd(__hugepd(p4d_val(p4dval))))) + return follow_hugepd(vma, __hugepd(p4d_val(p4dval)), + address, P4D_SHIFT, flags, ctx); + + if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval))) return no_page_table(vma, flags, address); return follow_pud_mask(vma, address, p4d, flags, ctx); @@ -980,7 +995,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma, unsigned long address, unsigned int flags, struct follow_page_context *ctx) { - pgd_t *pgd; + pgd_t *pgd, pgdval; struct mm_struct *mm = vma->vm_mm; ctx->page_mask = 0; @@ -995,11 +1010,17 @@ static struct page *follow_page_mask(struct vm_area_struct *vma, &ctx->page_mask); pgd = pgd_offset(mm, address); + pgdval = *pgd; - if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd))) - return no_page_table(vma, flags, address); + if (unlikely(is_hugepd(__hugepd(pgd_val(pgdval))))) + page = follow_hugepd(vma, __hugepd(pgd_val(pgdval)), + address, PGDIR_SHIFT, flags, ctx); + else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd))) + page = no_page_table(vma, flags, address); + else + page = follow_p4d_mask(vma, address, pgd, flags, ctx); - return follow_p4d_mask(vma, address, pgd, flags, ctx); + return page; } struct page *follow_page(struct vm_area_struct *vma, unsigned long address, @@ -3026,6 +3047,37 @@ static int gup_huge_pd(hugepd_t hugepd, unsigned long addr, return 1; } + +static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd, + unsigned long addr, unsigned int pdshift, + unsigned int flags, + struct follow_page_context *ctx) +{ + struct page *page; + struct hstate *h; + spinlock_t *ptl; + int nr = 0, ret; + pte_t *ptep; + + /* Only hugetlb supports hugepd */ + if 
(WARN_ON_ONCE(!is_vm_hugetlb_page(vma))) + return ERR_PTR(-EFAULT); + + h = hstate_vma(vma); + ptep = hugepte_offset(hugepd, addr, pdshift); + ptl = huge_pte_lock(h, vma->vm_mm, ptep); + ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE, + flags, &page, &nr); + spin_unlock(ptl); + + if (ret) { + WARN_ON_ONCE(nr != 1); + ctx->page_mask = (1U << huge_page_order(h)) - 1; + return page; + } + + return NULL; +} #else static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr, unsigned int pdshift, unsigned long end, unsigned int flags, @@ -3033,6 +3085,14 @@ static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr, { return 0; } + +static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd, + unsigned long addr, unsigned int pdshift, + unsigned int flags, + struct follow_page_context *ctx) +{ + return NULL; +} #endif /* CONFIG_ARCH_HAS_HUGEPD */ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,

From patchwork Tue Dec 19 07:55:38 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13497981
From: peterx@redhat.com
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Matthew Wilcox , Christophe Leroy , Lorenzo Stoakes , David Hildenbrand , Vlastimil Babka , Mike Kravetz , Mike Rapoport , Christoph Hellwig , John Hubbard , Andrew Jones , linux-arm-kernel@lists.infradead.org, Michael Ellerman , "Kirill A . Shutemov" , linuxppc-dev@lists.ozlabs.org, Rik van Riel , linux-riscv@lists.infradead.org, Yang Shi , James Houghton , "Aneesh Kumar K . V" , Andrew Morton , Jason Gunthorpe , Andrea Arcangeli , peterx@redhat.com, Axel Rasmussen
V" , Andrew Morton , Jason Gunthorpe , Andrea Arcangeli , peterx@redhat.com, Axel Rasmussen Subject: [PATCH 13/13] mm/gup: Handle hugetlb in the generic follow_page_mask code Date: Tue, 19 Dec 2023 15:55:38 +0800 Message-ID: <20231219075538.414708-14-peterx@redhat.com> In-Reply-To: <20231219075538.414708-1-peterx@redhat.com> References: <20231219075538.414708-1-peterx@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.4 X-Rspam-User: X-Stat-Signature: n5aof9scwbyf7stkee98j1dsaq8qkb3r X-Rspamd-Server: rspam07 X-Rspamd-Queue-Id: 406B580017 X-HE-Tag: 1702972717-923869 X-HE-Meta: U2FsdGVkX1+GvWF7OHU7JVjKn/fgd5u1Rrh4PccSfehlXK6J5iqDxTwwAxv797W/xAF2p0IKpxvWeU9elvXNynw2d+kF5Uzj/ztQXyjeMjIUeIbTAC+s2NPPkxq1MHQO8tApfVtGhLcnz2P5Q0yyrj0Ly7UZeswaeGmQhs/URT4+YSLco+NrjG5rM9lzH777BJUSRNAcOWV1SxrZjInYSh5P57MxOg8yHvGNhYtOM5CXlqon8A4YP24AcKdy83hV/7QdsRqEgyOzguZocA8xGg/tHaLgF0rwMy7qRt8T2uSxDcazaaRP41dZl96zXQrgkHguHDLPDKOVxCSCuaF7jdeHVTs5fQYvEm+85+CCfaqLE7w3QMLOw2yTkniy78iyJINKsgwjtVmQjdnzc3Cw/kOUUlFrTNceGnIgUCgJJSOsk+Lh44ONLswRHDsRG6t+eHO3K22XLSu9y4b+Vf+XqPYhbdR4FuIBXinSsTtCK6S+G/W3L01yB21TRLVC4NuVA29aGtVvo04NqAXQMiydLu6VrLfqgTXmPnXSgBYUr/re/I/zhiPry0UdfDeO1XaNKoZilRSnpkmuKeDaFPUML2ZrwyCdaOU9YeFw6dbnaxDsZSsZPdGrRPiX4Kc5kzhOaHQzL437tyxZKa8aN+OcVGx6j65dB2nCclFyZnUkbcqcotSYtXe5Tw2212F1F5cJJu4cBx9ccYje4g1APQFIT965gOcAJPG39wphX13HxHp4yxMJeYjFm5g/9+72TtBkdiRcVSubHILGx4NB8FIgj8ZqGBCMjXfbmalmvP7KytoBVBxWhqUUos2wCUIzrDHqTbG216CcTHKnkwlxH52S/sttrDVfrFfwZs+Zpz/ZfWYCSF82xeqrFUKHFxSEAz/IBPDqOehmORT58Gsy3N0uccf2ICrUKm7xD6GacakmzrWQvc6jjXkk8wW44RO8JRMKtudZ4xPwamX4TUt9Qw3 /f6bWWKN t+suWY1MDqh8sei3wzqVbuvEOfk5IjJftlEXsABYkwQRddnyPWI0RUdDQwQB3VTCDMB7ixrAX++TtEwGuh3+Xwfqm9noCqB4ZRvAAQXikJIJLrtHb1Xjm1KMwvNK/fedVSMYfTZZvfGE4JSNc84bCj4S5OrApciD6uYXUSqhlU6I2aIqQ0nEDxTe8+yPyOAmvjqH3H0Pf4de6YE5U7tXI+/gE7ZNcX3RYQTT0 X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Peter Xu Now follow_page() is ready to handle hugetlb pages in whatever form, and over all architectures. Switch to the generic code path. Time to retire hugetlb_follow_page_mask(), following the previous retirement of follow_hugetlb_page() in 4849807114b8. There may be a slight difference of how the loops run when processing slow GUP over a large hugetlb range on cont_pte/cont_pmd supported archs: each loop of __get_user_pages() will resolve one pgtable entry with the patch applied, rather than relying on the size of hugetlb hstate, the latter may cover multiple entries in one loop. A quick performance test on an aarch64 VM on M1 chip shows 15% degrade over a tight loop of slow gup after the path switched. That shouldn't be a problem because slow-gup should not be a hot path for GUP in general: when page is commonly present, fast-gup will already succeed, while when the page is indeed missing and require a follow up page fault, the slow gup degrade will probably buried in the fault paths anyway. It also explains why slow gup for THP used to be very slow before 57edfcfd3419 ("mm/gup: accelerate thp gup even for "pages != NULL"") lands, the latter not part of a performance analysis but a side benefit. If the performance will be a concern, we can consider handle CONT_PTE in follow_page(). Before that is justified to be necessary, keep everything clean and simple. 
Signed-off-by: Peter Xu --- include/linux/hugetlb.h | 7 ---- mm/gup.c | 15 +++------ mm/hugetlb.c | 71 ----------------------------------------- 3 files changed, 5 insertions(+), 88 deletions(-) diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h index f8c5c174c8a6..8a352d577ebf 100644 --- a/include/linux/hugetlb.h +++ b/include/linux/hugetlb.h @@ -332,13 +332,6 @@ static inline void hugetlb_zap_end( { } -static inline struct page *hugetlb_follow_page_mask( - struct vm_area_struct *vma, unsigned long address, unsigned int flags, - unsigned int *page_mask) -{ - BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/ -} - static inline int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src, struct vm_area_struct *dst_vma, diff --git a/mm/gup.c b/mm/gup.c index 14a7d13e7bd6..f34c0a912311 100644 --- a/mm/gup.c +++ b/mm/gup.c @@ -997,18 +997,11 @@ static struct page *follow_page_mask(struct vm_area_struct *vma, { pgd_t *pgd, pgdval; struct mm_struct *mm = vma->vm_mm; + struct page *page; - ctx->page_mask = 0; - - /* - * Call hugetlb_follow_page_mask for hugetlb vmas as it will use - * special hugetlb page table walking code. This eliminates the - * need to check for hugetlb entries in the general walking code. - */ - if (is_vm_hugetlb_page(vma)) - return hugetlb_follow_page_mask(vma, address, flags, - &ctx->page_mask); + vma_pgtable_walk_begin(vma); + ctx->page_mask = 0; pgd = pgd_offset(mm, address); pgdval = *pgd; @@ -1020,6 +1013,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma, else page = follow_p4d_mask(vma, address, pgd, flags, ctx); + vma_pgtable_walk_end(vma); + return page; } diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 29705e5c6f40..3013122a739f 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -6783,77 +6783,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte, } #endif /* CONFIG_USERFAULTFD */ -struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma, - unsigned long address, unsigned int flags, - unsigned int *page_mask) -{ - struct hstate *h = hstate_vma(vma); - struct mm_struct *mm = vma->vm_mm; - unsigned long haddr = address & huge_page_mask(h); - struct page *page = NULL; - spinlock_t *ptl; - pte_t *pte, entry; - int ret; - - hugetlb_vma_lock_read(vma); - pte = hugetlb_walk(vma, haddr, huge_page_size(h)); - if (!pte) - goto out_unlock; - - ptl = huge_pte_lock(h, mm, pte); - entry = huge_ptep_get(pte); - if (pte_present(entry)) { - page = pte_page(entry); - - if (!huge_pte_write(entry)) { - if (flags & FOLL_WRITE) { - page = NULL; - goto out; - } - - if (gup_must_unshare(vma, flags, page)) { - /* Tell the caller to do unsharing */ - page = ERR_PTR(-EMLINK); - goto out; - } - } - - page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT)); - - /* - * Note that page may be a sub-page, and with vmemmap - * optimizations the page struct may be read only. - * try_grab_page() will increase the ref count on the - * head page, so this will be OK. - * - * try_grab_page() should always be able to get the page here, - * because we hold the ptl lock and have verified pte_present(). - */ - ret = try_grab_page(page, flags); - - if (WARN_ON_ONCE(ret)) { - page = ERR_PTR(ret); - goto out; - } - - *page_mask = (1U << huge_page_order(h)) - 1; - } -out: - spin_unlock(ptl); -out_unlock: - hugetlb_vma_unlock_read(vma); - - /* - * Fixup retval for dump requests: if pagecache doesn't exist, - * don't try to allocate a new page but just skip it. 
- */ - if (!page && (flags & FOLL_DUMP) && - !hugetlbfs_pagecache_present(h, vma, address)) - page = ERR_PTR(-EFAULT); - - return page; -} - long hugetlb_change_protection(struct vm_area_struct *vma, unsigned long address, unsigned long end, pgprot_t newprot, unsigned long cp_flags)
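For completeness, here is a condensed sketch of how a slow-gup caller can consume the page_mask reported by the generic walk, so that a single walk still accounts for every base page of a huge leaf. It is modeled loosely on the __get_user_pages() loop, but the resolve_one() helper, the 2M leaf size, and the addresses are made up for the illustration:

/*
 * Condensed userspace sketch, not a copy of __get_user_pages(): it shows
 * how slow gup can consume the page_mask reported by the generic walk so
 * that one walk covers all base pages of a huge leaf.
 */
#include <stdio.h>

#define PAGE_SHIFT	12UL
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/* Pretend pgtable walk: report a 2M leaf, i.e. 512 base pages (assumption). */
static void resolve_one(unsigned long addr, unsigned long *page_mask)
{
	(void)addr;
	*page_mask = (1UL << (21 - PAGE_SHIFT)) - 1;
}

int main(void)
{
	unsigned long addr = 0x40000000UL;	/* assumed, 2M aligned */
	unsigned long nr_pages = 512, walks = 0;

	while (nr_pages) {
		unsigned long page_mask, page_increm;

		resolve_one(addr, &page_mask);
		walks++;

		/* Skip the base pages already covered by this leaf. */
		page_increm = 1 + (~(addr >> PAGE_SHIFT) & page_mask);
		if (page_increm > nr_pages)
			page_increm = nr_pages;
		nr_pages -= page_increm;
		addr += page_increm * PAGE_SIZE;
	}
	printf("512 pages resolved in %lu walk(s)\n", walks);
	return 0;
}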