From patchwork Wed Jan 3 09:14:11 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13509764
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton, David Hildenbrand, Kirill A. Shutemov, Yang Shi,
    peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton,
    Aneesh Kumar K.V, Rik van Riel, Andrea Arcangeli, Axel Rasmussen,
    Mike Rapoport, John Hubbard, Vlastimil Babka, Michael Ellerman,
    Christophe Leroy, Andrew Jones, linuxppc-dev@lists.ozlabs.org,
    Mike Kravetz, Muchun Song, linux-arm-kernel@lists.infradead.org,
    Jason Gunthorpe, Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox
Subject: [PATCH v2 01/13] mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
Date: Wed, 3 Jan 2024 17:14:11 +0800
Message-ID: <20240103091423.400294-2-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu <peterx@redhat.com>

Introduce a config option that will be selected whenever huge leaves can
appear in the pgtable (THP or hugetlbfs). It is useful for marking any
code that can process either hugetlb or THP pages at any level higher
than the pte level.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Jason Gunthorpe
---
 mm/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/Kconfig b/mm/Kconfig
index cb9d470f0bf7..9350ba180d52 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -904,6 +904,9 @@ config READ_ONLY_THP_FOR_FS
 
 endif # TRANSPARENT_HUGEPAGE
 
+config PGTABLE_HAS_HUGE_LEAVES
+	def_bool TRANSPARENT_HUGEPAGE || HUGETLB_PAGE
+
 #
 # UP and nommu archs use km based percpu allocator
 #

From patchwork Wed Jan 3 09:14:12 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13509765
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton, David Hildenbrand, Kirill A. Shutemov, Yang Shi,
    peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton,
    Aneesh Kumar K.V, Rik van Riel, Andrea Arcangeli, Axel Rasmussen,
    Mike Rapoport, John Hubbard, Vlastimil Babka, Michael Ellerman,
    Christophe Leroy, Andrew Jones, linuxppc-dev@lists.ozlabs.org,
    Mike Kravetz, Muchun Song, linux-arm-kernel@lists.infradead.org,
    Jason Gunthorpe, Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox
Subject: [PATCH v2 02/13] mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
Date: Wed, 3 Jan 2024 17:14:12 +0800
Message-ID: <20240103091423.400294-3-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu <peterx@redhat.com>

hugetlbfs_pagecache_present() will be used outside hugetlb.c soon.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/hugetlb.h | 9 +++++++++
 mm/hugetlb.c            | 4 ++--
 2 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c1ee640d87b1..e8eddd51fc17 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -174,6 +174,9 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx);
 
 pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long addr, pud_t *pud);
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma,
+				 unsigned long address);
 
 struct address_space *hugetlb_page_mapping_lock_write(struct page *hpage);
 
@@ -1221,6 +1224,12 @@ static inline void hugetlb_register_node(struct node *node)
 static inline void hugetlb_unregister_node(struct node *node)
 {
 }
+
+static inline bool hugetlbfs_pagecache_present(
+    struct hstate *h, struct vm_area_struct *vma, unsigned long address)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0d262784ce60..bfb52bb8b943 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6017,8 +6017,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 /*
  * Return whether there is a pagecache page to back given address within VMA.
  */
-static bool hugetlbfs_pagecache_present(struct hstate *h,
-			struct vm_area_struct *vma, unsigned long address)
+bool hugetlbfs_pagecache_present(struct hstate *h,
+				 struct vm_area_struct *vma, unsigned long address)
 {
 	struct address_space *mapping = vma->vm_file->f_mapping;
 	pgoff_t idx = linear_page_index(vma, address);

From patchwork Wed Jan 3 09:14:13 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13509766
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton, David Hildenbrand, Kirill A. Shutemov, Yang Shi,
    peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton,
    Aneesh Kumar K.V, Rik van Riel, Andrea Arcangeli, Axel Rasmussen,
    Mike Rapoport, John Hubbard, Vlastimil Babka, Michael Ellerman,
    Christophe Leroy, Andrew Jones, linuxppc-dev@lists.ozlabs.org,
    Mike Kravetz, Muchun Song, linux-arm-kernel@lists.infradead.org,
    Jason Gunthorpe, Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox
Subject: [PATCH v2 03/13] mm: Provide generic pmd_thp_or_huge()
Date: Wed, 3 Jan 2024 17:14:13 +0800
Message-ID: <20240103091423.400294-4-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu <peterx@redhat.com>

ARM defines pmd_thp_or_huge(), detecting either a THP or a hugetlb huge
PMD. It is a helpful helper if we want to merge more THP and hugetlb
code paths.

Make it a generic default implementation, which exists only with
CONFIG_MMU. An arch can override it by defining its own version; for
example, ARM's pgtable-2level.h defines it to always return false.

Keep the macro declared for all configs; it should be optimized to a
constant false anyway when !THP && !HUGETLB.
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/pgtable.h | 4 ++++
 mm/gup.c                | 3 +--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 466cf477551a..2b42e95a4e3a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1362,6 +1362,10 @@ static inline int pmd_write(pmd_t pmd)
 #endif /* pmd_write */
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+#ifndef pmd_thp_or_huge
+#define pmd_thp_or_huge(pmd)	(pmd_huge(pmd) || pmd_trans_huge(pmd))
+#endif
+
 #ifndef pud_write
 static inline int pud_write(pud_t pud)
 {
diff --git a/mm/gup.c b/mm/gup.c
index df83182ec72d..eebae70d2465 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3004,8 +3004,7 @@ static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned lo
 		if (!pmd_present(pmd))
 			return 0;
 
-		if (unlikely(pmd_trans_huge(pmd) || pmd_huge(pmd) ||
-			     pmd_devmap(pmd))) {
+		if (unlikely(pmd_thp_or_huge(pmd) || pmd_devmap(pmd))) {
 			/* See gup_pte_range() */
 			if (pmd_protnone(pmd))
 				return 0;

From patchwork Wed Jan 3 09:14:14 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13509769
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton, David Hildenbrand, Kirill A. Shutemov, Yang Shi,
    peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton,
    Aneesh Kumar K.V, Rik van Riel, Andrea Arcangeli, Axel Rasmussen,
    Mike Rapoport, John Hubbard, Vlastimil Babka, Michael Ellerman,
    Christophe Leroy, Andrew Jones, linuxppc-dev@lists.ozlabs.org,
    Mike Kravetz, Muchun Song, linux-arm-kernel@lists.infradead.org,
    Jason Gunthorpe, Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox
Subject: [PATCH v2 04/13] mm: Make HPAGE_PXD_* macros even if !THP
Date: Wed, 3 Jan 2024 17:14:14 +0800
Message-ID: <20240103091423.400294-5-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu <peterx@redhat.com>

These macros can be helpful when we plan to merge hugetlb code into
generic code. Move them out and define them even if !THP.

We actually already defined HPAGE_PMD_NR for other reasons even if
!THP. Reorganize these macros.

Reviewed-by: Christoph Hellwig
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Jason Gunthorpe
---
 include/linux/huge_mm.h | 17 ++++++-----------
 1 file changed, 6 insertions(+), 11 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5adb86af35fc..96bd4b5d027e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -64,9 +64,6 @@ ssize_t single_hugepage_flag_show(struct kobject *kobj,
 				  enum transparent_hugepage_flag flag);
 extern struct kobj_attribute shmem_enabled_attr;
 
-#define HPAGE_PMD_ORDER (HPAGE_PMD_SHIFT-PAGE_SHIFT)
-#define HPAGE_PMD_NR (1<<HPAGE_PMD_ORDER)

X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13509767
X-Patchwork-Id: 13509767
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton, David Hildenbrand, "Kirill A . Shutemov", Yang Shi,
    peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton,
    "Aneesh Kumar K . V", Rik van Riel, Andrea Arcangeli, Axel Rasmussen,
    Mike Rapoport, John Hubbard, Vlastimil Babka, Michael Ellerman,
    Christophe Leroy, Andrew Jones, linuxppc-dev@lists.ozlabs.org,
    Mike Kravetz, Muchun Song, linux-arm-kernel@lists.infradead.org,
    Jason Gunthorpe, Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox
Subject: [PATCH v2 05/13] mm: Introduce vma_pgtable_walk_{begin|end}()
Date: Wed, 3 Jan 2024 17:14:15 +0800
Message-ID: <20240103091423.400294-6-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu

Introduce per-vma begin()/end() helpers for pgtable walks.  This is
preparation work for merging the hugetlb pgtable walkers with generic mm.

The helpers need to be called before and after a pgtable walk, and will
become necessary once the pgtable walker code supports hugetlb pages.
It is a hook point for any type of VMA, but for now only hugetlb uses it
to stabilize the pgtable pages against going away (due to possible pmd
unsharing).

Reviewed-by: Christoph Hellwig
Reviewed-by: Muchun Song
Signed-off-by: Peter Xu
---
 include/linux/mm.h |  3 +++
 mm/memory.c        | 12 ++++++++++++
 2 files changed, 15 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 896c0079f64f..6836da00671a 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -4181,4 +4181,7 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+void vma_pgtable_walk_begin(struct vm_area_struct *vma);
+void vma_pgtable_walk_end(struct vm_area_struct *vma);
+
 #endif /* _LINUX_MM_H */
diff --git a/mm/memory.c b/mm/memory.c
index 7e1f4849463a..89f3caac2ec8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6279,3 +6279,15 @@ void ptlock_free(struct ptdesc *ptdesc)
 	kmem_cache_free(page_ptl_cachep, ptdesc->ptl);
 }
 #endif
+
+void vma_pgtable_walk_begin(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_lock_read(vma);
+}
+
+void vma_pgtable_walk_end(struct vm_area_struct *vma)
+{
+	if (is_vm_hugetlb_page(vma))
+		hugetlb_vma_unlock_read(vma);
+}
From patchwork Wed Jan 3 09:14:16 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13509768
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 06/13] mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
Date: Wed, 3 Jan 2024 17:14:16 +0800
Message-ID: <20240103091423.400294-7-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu

The hugepd format for GUP is only used on PowerPC with hugetlbfs.  There
are some kernel uses of hugepd (see hugepd_populate_kernel() for PPC_8XX),
but those pages are not candidates for GUP.

Commit a6e79df92e4a ("mm/gup: disallow FOLL_LONGTERM GUP-fast writing to
file-backed mappings") added a check to fail gup-fast if there is a
potential risk of violating GUP over writeback file systems.  That should
never apply to hugepd: hugepd is an old (and even software-only) format,
and there is no plan to extend it to other file-typed memories that are
prone to the same issue.

Drop that check, not only because it will never be true for hugepd per
any known plan, but also because it paves the way for reusing the
function outside of fast-gup.

To make sure we still remember this issue in case hugepd is ever extended
to support non-hugetlbfs memories, add a rich comment above gup_huge_pd()
explaining the issue with proper references.

Cc: Christoph Hellwig
Cc: Lorenzo Stoakes
Cc: Michael Ellerman
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Peter Xu
---
 mm/gup.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index eebae70d2465..fa93e14b7fca 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2820,11 +2820,6 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 		return 0;
 	}
 
-	if (!folio_fast_pin_allowed(folio, flags)) {
-		gup_put_folio(folio, refs, flags);
-		return 0;
-	}
-
 	if (!pte_write(pte) && gup_must_unshare(NULL, flags, &folio->page)) {
 		gup_put_folio(folio, refs, flags);
 		return 0;
@@ -2835,6 +2830,14 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	return 1;
 }
 
+/*
+ * NOTE: currently GUP for a hugepd is only possible on hugetlbfs file
+ * systems on Power, which does not have issue with folio writeback against
+ * GUP updates.  When hugepd will be extended to support non-hugetlbfs or
+ * even anonymous memory, we need to do extra check as what we do with most
+ * of the other folios. See writable_file_mapping_allowed() and
+ * folio_fast_pin_allowed() for more information.
+ */
 static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 		       unsigned int pdshift, unsigned long end, unsigned int flags,
 		       struct page **pages, int *nr)
From patchwork Wed Jan 3 09:14:17 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13509770
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/13] mm/gup: Refactor record_subpages() to find 1st small page
Date: Wed, 3 Jan 2024 17:14:17 +0800
Message-ID: <20240103091423.400294-8-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu

All the fast-gup functions take a tail page to operate on, and always
need to do page mask calculations before feeding it into
record_subpages().  Merge that logic into record_subpages(), so that it
does the nth_page() calculation itself.

Signed-off-by: Peter Xu
---
 mm/gup.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index fa93e14b7fca..3813aad79c4a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2767,13 +2767,16 @@ static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
 }
 #endif
 
-static int record_subpages(struct page *page, unsigned long addr,
-			   unsigned long end, struct page **pages)
+static int record_subpages(struct page *page, unsigned long sz,
+			   unsigned long addr, unsigned long end,
+			   struct page **pages)
 {
+	struct page *start_page;
 	int nr;
 
+	start_page = nth_page(page, (addr & (sz - 1)) >> PAGE_SHIFT);
 	for (nr = 0; addr != end; nr++, addr += PAGE_SIZE)
-		pages[nr] = nth_page(page, nr);
+		pages[nr] = nth_page(start_page, nr);
 
 	return nr;
 }
@@ -2808,8 +2811,8 @@ static int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	/* hugepages are never "special" */
 	VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 
-	page = nth_page(pte_page(pte), (addr & (sz - 1)) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pte_page(pte);
+	refs = record_subpages(page, sz, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2882,8 +2885,8 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pmd_page(orig), (addr & ~PMD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pmd_page(orig);
+	refs = record_subpages(page, PMD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2926,8 +2929,8 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 					     pages, nr);
 	}
 
-	page = nth_page(pud_page(orig), (addr & ~PUD_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pud_page(orig);
+	refs = record_subpages(page, PUD_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
@@ -2966,8 +2969,8 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 
 	BUILD_BUG_ON(pgd_devmap(orig));
 
-	page = nth_page(pgd_page(orig), (addr & ~PGDIR_MASK) >> PAGE_SHIFT);
-	refs = record_subpages(page, addr, end, pages + *nr);
+	page = pgd_page(orig);
+	refs = record_subpages(page, PGDIR_SIZE, addr, end, pages + *nr);
 
 	folio = try_grab_folio(page, refs, flags);
 	if (!folio)
From patchwork Wed Jan 3 09:14:18 2024
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 13509771
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 08/13] mm/gup: Handle hugetlb for no_page_table()
Date: Wed, 3 Jan 2024 17:14:18 +0800
Message-ID: <20240103091423.400294-9-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
U2FsdGVkX19iZ5YGcZe97Prva3snoGEPBi+IwenHVrTj6/0HWLGfUXsEy6KXLLOPm0dqu1aYLk9jyg7cjl2JZt4Trwl5dF3KCEHh3sJ0nnv14LsNOHP2BZO4QmppEw5E4ROIWF+1gB36BkLXikIeDHlfGHfgaJadOz2V4YaD3B5MUoZuDOhbsDNZLdKICLQhPiFxsqI00Miyp8fZZoQncuJ1hw8sFit4t/fPcOVuuYu5WDfH1KhNZQRixw8LIyQpTmXLsWQ9xwMK4rDeTpXQDJwNG+tXgImtj9UKgZTHHmA+y8stYQX1dwFIxHMhoCHJVvh01tRaYEh1u8wtjWzzD5e93mrWl+0Bp3ZiYzw0ORFlm9TXi+u5tbNIIg0Zi6rVD3nqKS11xU2RA8q3ZJKt3DWuEhYzPlN/LCDTVXScJHb6brPeIeLz43QQVw6wDWTLIjGJhg4FOcNHjuWv1OkVNCJswcrQKWBPN7iBO1Mzdt6k+yFPXXUd99j4r7NKOATr9VjR4WBeUauGSkUynwsq54OgIt0z/Fi+CSfXv7iK/6EoKHGu+VJBt6/c4jOxmbevetO4q/1+F7r/TedPmHRTr/rozDoQ0/ZemuexxDrcTAjUz5K32Mby6uAqAoQHp2AdBJ0Q0yVFggAk198HXkFZWJ2Xj6j60kCwob4LIb6+8gNgdySh9YM0X8oeze9PlMWsX7UR3uJzgIudNz9BzJh28Eqi1nCxZF/J4R08pdJS7luuE45ifN3IczCFQYAU8zI/AOeUJPamPxl8LYKdDNfCG1GcrgbRQDcpBKXczfcglAju67U4pRsCdKmUow5ka+J5zBdedeCcryvP/bBilyEwgMgWB5zTzPwh9RAKaBt889mr3z/CycEMRUyrfYblRjlPSsDYlq2ywpBc3DA4FYBu75ZAoINGz8psRoQwul75AaFWRzWHOpBtuYKbQiVtDL8jHPoe8lSIb4VrF8Jypo5 kO2zfCan ebnkwZ9yvBoNHIBdBLbAvmw9aDsKZTwCFfXDiyg3N810FH/xQnMicUi3wcVcYareWIjsqaP0g2CAtNrdM6jQEeYWp+OScXSWrJRAjUmplrpkuHY+Dhu3seuSgAuKMeZkfcO0TiG+Hll/nW77qOi0ubrIV/zVdeQ5SW+HpUcFVi//uMMUcDFUmaeqQ0ry055/sNCme7IzJHnzcpR7y+ht5VOBJjiI6KXb29qacPdFKf+Ws2I78Udc8X/PEwquKlAvk9UhOTuS0hi10FgA1E7NNF+Ng6A== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: From: Peter Xu no_page_table() is not yet used for hugetlb code paths. Make it prepared. The major difference here is hugetlb will return -EFAULT as long as page cache does not exist, even if VM_SHARED. See hugetlb_follow_page_mask(). Pass "address" into no_page_table() too, as hugetlb will need it. 
Reviewed-by: Christoph Hellwig
Signed-off-by: Peter Xu
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 3813aad79c4a..b8a80e2bfe08 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -501,19 +501,27 @@ static inline void mm_set_has_pinned_flag(unsigned long *mm_flags)
 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
-				  unsigned int flags)
+				  unsigned int flags, unsigned long address)
 {
+	if (!(flags & FOLL_DUMP))
+		return NULL;
+
 	/*
-	 * When core dumping an enormous anonymous area that nobody
-	 * has touched so far, we don't want to allocate unnecessary pages or
+	 * When core dumping, we don't want to allocate unnecessary pages or
 	 * page tables. Return error instead of NULL to skip handle_mm_fault,
 	 * then get_dump_page() will return NULL to leave a hole in the dump.
 	 * But we can only make this optimization where a hole would surely
 	 * be zero-filled if handle_mm_fault() actually did handle it.
 	 */
-	if ((flags & FOLL_DUMP) &&
-	    (vma_is_anonymous(vma) || !vma->vm_ops->fault))
+	if (is_vm_hugetlb_page(vma)) {
+		struct hstate *h = hstate_vma(vma);
+
+		if (!hugetlbfs_pagecache_present(h, vma, address))
+			return ERR_PTR(-EFAULT);
+	} else if ((vma_is_anonymous(vma) || !vma->vm_ops->fault)) {
 		return ERR_PTR(-EFAULT);
+	}
+
 	return NULL;
 }
 
@@ -593,7 +601,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	pte = ptep_get(ptep);
 	if (!pte_present(pte))
 		goto no_page;
@@ -685,7 +693,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	pte_unmap_unlock(ptep, ptl);
 	if (!pte_none(pte))
 		return NULL;
-	return no_page_table(vma, flags);
+	return no_page_table(vma, flags, address);
 }
 
 static struct page *follow_pmd_mask(struct vm_area_struct *vma,
@@ -701,27 +709,27 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 	pmd = pmd_offset(pudp, address);
 	pmdval = pmdp_get_lockless(pmd);
 	if (pmd_none(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (likely(!pmd_trans_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 
 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_present(*pmd))) {
 		spin_unlock(ptl);
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(!pmd_trans_huge(*pmd))) {
 		spin_unlock(ptl);
@@ -752,17 +760,17 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pud = pud_offset(p4dp, address);
 	if (pud_none(*pud))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
 		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	}
 	if (unlikely(pud_bad(*pud)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	return follow_pmd_mask(vma, address, pud, flags, ctx);
 }
 
@@ -776,10 +784,10 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 
 	p4d = p4d_offset(pgdp, address);
 	if (p4d_none(*p4d))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 	BUILD_BUG_ON(p4d_huge(*p4d));
 	if (unlikely(p4d_bad(*p4d)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_pud_mask(vma, address, p4d, flags, ctx);
 }
@@ -829,7 +837,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 
 	pgd = pgd_offset(mm, address);
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags);
+		return no_page_table(vma, flags, address);
 
 	return follow_p4d_mask(vma, address, pgd, flags, ctx);
 }
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 09/13] mm/gup: Cache *pudp in follow_pud_mask()
Date: Wed, 3 Jan 2024 17:14:19 +0800
Message-ID: <20240103091423.400294-10-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>

From: Peter Xu

Introduce "pud_t pud" in the function so the code won't dereference *pudp
multiple times.  Repeated dereferences are not only less readable; if *pudp
is concurrently modified, each dereference may also race and observe a
different value.

Acked-by: James Houghton
Signed-off-by: Peter Xu
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index b8a80e2bfe08..63845b3ec44f 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -753,26 +753,27 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 				    unsigned int flags,
 				    struct follow_page_context *ctx)
 {
-	pud_t *pud;
+	pud_t *pudp, pud;
 	spinlock_t *ptl;
 	struct page *page;
 	struct mm_struct *mm = vma->vm_mm;
 
-	pud = pud_offset(p4dp, address);
-	if (pud_none(*pud))
+	pudp = pud_offset(p4dp, address);
+	pud = READ_ONCE(*pudp);
+	if (pud_none(pud))
 		return no_page_table(vma, flags, address);
-	if (pud_devmap(*pud)) {
-		ptl = pud_lock(mm, pud);
-		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
+	if (pud_devmap(pud)) {
+		ptl = pud_lock(mm, pudp);
+		page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
 		spin_unlock(ptl);
 		if (page)
 			return page;
 		return no_page_table(vma, flags, address);
 	}
-	if (unlikely(pud_bad(*pud)))
+	if (unlikely(pud_bad(pud)))
 		return no_page_table(vma, flags, address);
-	return follow_pmd_mask(vma, address, pud, flags, ctx);
+	return follow_pmd_mask(vma, address, pudp, flags, ctx);
 }
 
 static struct page *follow_p4d_mask(struct vm_area_struct *vma,
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 10/13] mm/gup: Handle huge pud for follow_pud_mask()
Date: Wed, 3 Jan 2024 17:14:20 +0800
Message-ID: <20240103091423.400294-11-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>

From: Peter Xu

Teach follow_pud_mask() to handle normal PUD pages like hugetlb.

Rename follow_devmap_pud() to follow_huge_pud() so that it can process
either huge devmap or hugetlb.  Move it out of TRANSPARENT_HUGEPAGE_PUD
and huge_memory.c (which relies on CONFIG_THP).  The new follow_huge_pud()
takes care of possible CoR for hugetlb if necessary.  touch_pud() needs to
be moved out of huge_memory.c to be accessible from gup.c even if !THP.
While at it, optimize the non-present check by adding a pud_present() check
before taking the pgtable lock, failing follow_page() early if the PUD is
not present: that is required by both devmap and hugetlb.  Use pud_huge()
to also cover the pud_devmap() case.

One more trivial change: introduce "pud_t pud" in the code paths along the
way, so the code doesn't dereference *pudp multiple times.  Repeated
dereferences are not only less readable; if *pudp is concurrently modified,
each dereference may also race and observe a different value.

Set ctx->page_mask properly for a PUD entry.  As a side effect, this patch
should also allow devmap GUP on a PUD to jump over the whole PUD range, but
that is not yet verified.  Hugetlb could already do so prior to this patch.

Signed-off-by: Peter Xu
Reviewed-by: Jason Gunthorpe
---
 include/linux/huge_mm.h |  8 -----
 mm/gup.c                | 70 +++++++++++++++++++++++++++++++++++++++--
 mm/huge_memory.c        | 47 ++-------------------------
 mm/internal.h           |  2 ++
 4 files changed, 71 insertions(+), 56 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 96bd4b5d027e..3b73d20d537e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -345,8 +345,6 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap);
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
@@ -502,12 +500,6 @@ static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
 	return NULL;
 }
 
-static inline struct page *follow_devmap_pud(struct vm_area_struct *vma,
-	unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-	return NULL;
-}
-
 static inline bool thp_migration_supported(void)
 {
 	return false;
diff --git a/mm/gup.c b/mm/gup.c
index 63845b3ec44f..760406180222 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -525,6 +525,70 @@ static struct page *no_page_table(struct vm_area_struct *vma,
 	return NULL;
 }
 
+#ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+				    unsigned long addr, pud_t *pudp,
+				    int flags, struct follow_page_context *ctx)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	struct page *page;
+	pud_t pud = *pudp;
+	unsigned long pfn = pud_pfn(pud);
+	int ret;
+
+	assert_spin_locked(pud_lockptr(mm, pudp));
+
+	if ((flags & FOLL_WRITE) && !pud_write(pud))
+		return NULL;
+
+	if (!pud_present(pud))
+		return NULL;
+
+	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
+
+#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
+	if (pud_devmap(pud)) {
+		/*
+		 * device mapped pages can only be returned if the caller
+		 * will manage the page reference count.
+		 *
+		 * At least one of FOLL_GET | FOLL_PIN must be set, so
+		 * assert that here:
+		 */
+		if (!(flags & (FOLL_GET | FOLL_PIN)))
+			return ERR_PTR(-EEXIST);
+
+		if (flags & FOLL_TOUCH)
+			touch_pud(vma, addr, pudp, flags & FOLL_WRITE);
+
+		ctx->pgmap = get_dev_pagemap(pfn, ctx->pgmap);
+		if (!ctx->pgmap)
+			return ERR_PTR(-EFAULT);
+	}
+#endif	/* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
+	page = pfn_to_page(pfn);
+
+	if (!pud_devmap(pud) && !pud_write(pud) &&
+	    gup_must_unshare(vma, flags, page))
+		return ERR_PTR(-EMLINK);
+
+	ret = try_grab_page(page, flags);
+	if (ret)
+		page = ERR_PTR(ret);
+	else
+		ctx->page_mask = HPAGE_PUD_NR - 1;
+
+	return page;
+}
+#else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+static struct page *follow_huge_pud(struct vm_area_struct *vma,
+				    unsigned long addr, pud_t *pudp,
+				    int flags, struct follow_page_context *ctx)
+{
+	return NULL;
+}
+#endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
+
 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
 		pte_t *pte, unsigned int flags)
 {
@@ -760,11 +824,11 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 
 	pudp = pud_offset(p4dp, address);
 	pud = READ_ONCE(*pudp);
-	if (pud_none(pud))
+	if (pud_none(pud) || !pud_present(pud))
 		return no_page_table(vma, flags, address);
-	if (pud_devmap(pud)) {
+	if (pud_huge(pud)) {
 		ptl = pud_lock(mm, pudp);
-		page = follow_devmap_pud(vma, address, pudp, flags, &ctx->pgmap);
+		page = follow_huge_pud(vma, address, pudp, flags, ctx);
 		spin_unlock(ptl);
 		if (page)
 			return page;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 94ef5c02b459..9993d2b18809 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1373,8 +1373,8 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 }
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
-static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
-		      pud_t *pud, bool write)
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+	       pud_t *pud, bool write)
 {
 	pud_t _pud;
 
@@ -1386,49 +1386,6 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 	update_mmu_cache_pud(vma, addr, pud);
 }
 
-struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap)
-{
-	unsigned long pfn = pud_pfn(*pud);
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pud_lockptr(mm, pud));
-
-	if (flags & FOLL_WRITE && !pud_write(*pud))
-		return NULL;
-
-	if (pud_present(*pud) && pud_devmap(*pud))
-		/* pass */;
-	else
-		return NULL;
-
-	if (flags & FOLL_TOUCH)
-		touch_pud(vma, addr, pud, flags & FOLL_WRITE);
-
-	/*
-	 * device mapped pages can only be returned if the
-	 * caller will manage the page reference count.
-	 *
-	 * At least one of FOLL_GET | FOLL_PIN must be set, so assert that here:
-	 */
-	if (!(flags & (FOLL_GET | FOLL_PIN)))
-		return ERR_PTR(-EEXIST);
-
-	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-	*pgmap = get_dev_pagemap(pfn, *pgmap);
-	if (!*pgmap)
-		return ERR_PTR(-EFAULT);
-	page = pfn_to_page(pfn);
-
-	ret = try_grab_page(page, flags);
-	if (ret)
-		page = ERR_PTR(ret);
-
-	return page;
-}
-
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
 		  struct vm_area_struct *vma)
diff --git a/mm/internal.h b/mm/internal.h
index f309a010d50f..5821b7a14b62 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1007,6 +1007,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
 /*
  * mm/huge_memory.c
  */
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+	       pud_t *pud, bool write);
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 				   unsigned long addr, pmd_t *pmd,
 				   unsigned int flags);
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 11/13] mm/gup: Handle huge pmd for follow_pmd_mask()
Date: Wed, 3 Jan 2024 17:14:21 +0800
Message-ID: <20240103091423.400294-12-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu

Replace pmd_trans_huge() with pmd_thp_or_huge() so that pmd_huge() entries are also covered wherever they are enabled. FOLL_TOUCH and FOLL_SPLIT_PMD still apply only to THPs, not yet to huge pages.

Since follow_trans_huge_pmd() can now process hugetlb pages, rename it to follow_huge_pmd() to match what it does, and move it into gup.c so that it no longer depends on CONFIG_TRANSPARENT_HUGEPAGE.

While at it, move the ctx->page_mask setup into follow_huge_pmd() and only set it when the page is valid. It was not a bug to set it before even when GUP failed (page==NULL), because follow_page_mask() callers always ignore page_mask in that case, but doing it this way makes the code cleaner.

Signed-off-by: Peter Xu
Reviewed-by: Jason Gunthorpe
---
 mm/gup.c         | 107 ++++++++++++++++++++++++++++++++++++++++++++---
 mm/huge_memory.c |  86 +------------------------------------
 mm/internal.h    |   5 +--
 3 files changed, 105 insertions(+), 93 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 760406180222..d96429b6fc55 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -580,6 +580,93 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 	return page;
 }
+
+/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
+static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
+					struct vm_area_struct *vma,
+					unsigned int flags)
+{
+	/* If the pmd is writable, we can write to the page. */
+	if (pmd_write(pmd))
+		return true;
+
+	/* Maybe FOLL_FORCE is set to override it?
+	 */
+	if (!(flags & FOLL_FORCE))
+		return false;
+
+	/* But FOLL_FORCE has no effect on shared mappings */
+	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
+		return false;
+
+	/* ... or read-only private ones */
+	if (!(vma->vm_flags & VM_MAYWRITE))
+		return false;
+
+	/* ... or already writable ones that just need to take a write fault */
+	if (vma->vm_flags & VM_WRITE)
+		return false;
+
+	/*
+	 * See can_change_pte_writable(): we broke COW and could map the page
+	 * writable if we have an exclusive anonymous page ...
+	 */
+	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
+		return false;
+
+	/* ... and a write-fault isn't required for other reasons. */
+	if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
+		return false;
+	return !userfaultfd_huge_pmd_wp(vma, pmd);
+}
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+				    unsigned long addr, pmd_t *pmd,
+				    unsigned int flags,
+				    struct follow_page_context *ctx)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	pmd_t pmdval = *pmd;
+	struct page *page;
+	int ret;
+
+	assert_spin_locked(pmd_lockptr(mm, pmd));
+
+	page = pmd_page(pmdval);
+	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
+
+	if ((flags & FOLL_WRITE) &&
+	    !can_follow_write_pmd(pmdval, page, vma, flags))
+		return NULL;
+
+	/* Avoid dumping huge zero page */
+	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(pmdval))
+		return ERR_PTR(-EFAULT);
+
+	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
+		return NULL;
+
+	if (!pmd_write(pmdval) && gup_must_unshare(vma, flags, page))
+		return ERR_PTR(-EMLINK);
+
+	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
+		       !PageAnonExclusive(page), page);
+
+	ret = try_grab_page(page, flags);
+	if (ret)
+		return ERR_PTR(ret);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (pmd_trans_huge(pmdval) && (flags & FOLL_TOUCH))
+		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
+#endif	/* CONFIG_TRANSPARENT_HUGEPAGE */
+
+	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
+	ctx->page_mask = HPAGE_PMD_NR - 1;
+	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
+
+	return page;
+}
+
 #else  /* CONFIG_PGTABLE_HAS_HUGE_LEAVES */
 static struct page *follow_huge_pud(struct vm_area_struct *vma,
 				    unsigned long addr, pud_t *pudp,
@@ -587,6 +674,14 @@ static struct page *follow_huge_pud(struct vm_area_struct *vma,
 {
 	return NULL;
 }
+
+static struct page *follow_huge_pmd(struct vm_area_struct *vma,
+				    unsigned long addr, pmd_t *pmd,
+				    unsigned int flags,
+				    struct follow_page_context *ctx)
+{
+	return NULL;
+}
 #endif	/* CONFIG_PGTABLE_HAS_HUGE_LEAVES */

 static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
@@ -784,31 +879,31 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 			return page;
 		return no_page_table(vma, flags, address);
 	}
-	if (likely(!pmd_trans_huge(pmdval)))
+	if (likely(!pmd_thp_or_huge(pmdval)))
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);

 	if (pmd_protnone(pmdval) && !gup_can_follow_protnone(vma, flags))
 		return no_page_table(vma, flags, address);

 	ptl = pmd_lock(mm, pmd);
-	if (unlikely(!pmd_present(*pmd))) {
+	pmdval = *pmd;
+	if (unlikely(!pmd_present(pmdval))) {
 		spin_unlock(ptl);
 		return no_page_table(vma, flags, address);
 	}
-	if (unlikely(!pmd_trans_huge(*pmd))) {
+	if (unlikely(!pmd_thp_or_huge(pmdval))) {
 		spin_unlock(ptl);
 		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 	}
-	if (flags & FOLL_SPLIT_PMD) {
+	if (pmd_trans_huge(pmdval) && (flags & FOLL_SPLIT_PMD)) {
 		spin_unlock(ptl);
 		split_huge_pmd(vma, pmd, address);
 		/* If pmd was left empty, stuff a page table in there quickly */
 		return pte_alloc(mm, pmd) ?
			ERR_PTR(-ENOMEM) :
 			follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
 	}
-	page = follow_trans_huge_pmd(vma, address, pmd, flags);
+	page = follow_huge_pmd(vma, address, pmd, flags, ctx);
 	spin_unlock(ptl);
-	ctx->page_mask = HPAGE_PMD_NR - 1;
 	return page;
 }

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9993d2b18809..317cb445c442 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1216,8 +1216,8 @@ vm_fault_t vmf_insert_pfn_pud(struct vm_fault *vmf, pfn_t pfn, bool write)
 EXPORT_SYMBOL_GPL(vmf_insert_pfn_pud);
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */

-static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
-		      pmd_t *pmd, bool write)
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	       pmd_t *pmd, bool write)
 {
 	pmd_t _pmd;

@@ -1572,88 +1572,6 @@ static inline bool can_change_pmd_writable(struct vm_area_struct *vma,
 	return pmd_dirty(pmd);
 }

-/* FOLL_FORCE can write to even unwritable PMDs in COW mappings. */
-static inline bool can_follow_write_pmd(pmd_t pmd, struct page *page,
-					struct vm_area_struct *vma,
-					unsigned int flags)
-{
-	/* If the pmd is writable, we can write to the page. */
-	if (pmd_write(pmd))
-		return true;
-
-	/* Maybe FOLL_FORCE is set to override it? */
-	if (!(flags & FOLL_FORCE))
-		return false;
-
-	/* But FOLL_FORCE has no effect on shared mappings */
-	if (vma->vm_flags & (VM_MAYSHARE | VM_SHARED))
-		return false;
-
-	/* ... or read-only private ones */
-	if (!(vma->vm_flags & VM_MAYWRITE))
-		return false;
-
-	/* ... or already writable ones that just need to take a write fault */
-	if (vma->vm_flags & VM_WRITE)
-		return false;
-
-	/*
-	 * See can_change_pte_writable(): we broke COW and could map the page
-	 * writable if we have an exclusive anonymous page ...
-	 */
-	if (!page || !PageAnon(page) || !PageAnonExclusive(page))
-		return false;
-
-	/* ... and a write-fault isn't required for other reasons.
-	 */
-	if (vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd))
-		return false;
-	return !userfaultfd_huge_pmd_wp(vma, pmd);
-}
-
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-				   unsigned long addr,
-				   pmd_t *pmd,
-				   unsigned int flags)
-{
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-	int ret;
-
-	assert_spin_locked(pmd_lockptr(mm, pmd));
-
-	page = pmd_page(*pmd);
-	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
-
-	if ((flags & FOLL_WRITE) &&
-	    !can_follow_write_pmd(*pmd, page, vma, flags))
-		return NULL;
-
-	/* Avoid dumping huge zero page */
-	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
-		return ERR_PTR(-EFAULT);
-
-	if (pmd_protnone(*pmd) && !gup_can_follow_protnone(vma, flags))
-		return NULL;
-
-	if (!pmd_write(*pmd) && gup_must_unshare(vma, flags, page))
-		return ERR_PTR(-EMLINK);
-
-	VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
-		       !PageAnonExclusive(page), page);
-
-	ret = try_grab_page(page, flags);
-	if (ret)
-		return ERR_PTR(ret);
-
-	if (flags & FOLL_TOUCH)
-		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
-	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
-	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
-
-	return page;
-}
-
 /* NUMA hinting page fault entry point for trans huge pmds */
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
diff --git a/mm/internal.h b/mm/internal.h
index 5821b7a14b62..99994b41a220 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1009,9 +1009,8 @@ int __must_check try_grab_page(struct page *page, unsigned int flags);
  */
 void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 	       pud_t *pud, bool write);
-struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
-				   unsigned long addr, pmd_t *pmd,
-				   unsigned int flags);
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	       pmd_t *pmd, bool write);

 /*
  * mm/mmap.c

From patchwork Wed Jan 3 09:14:22 2024
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton, David Hildenbrand, "Kirill A. Shutemov", Yang Shi, peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton, "Aneesh Kumar K. V", Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka, Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev@lists.ozlabs.org, Mike Kravetz, Muchun Song, linux-arm-kernel@lists.infradead.org, Jason Gunthorpe, Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox
Subject: [PATCH v2 12/13] mm/gup: Handle hugepd for follow_page()
Date: Wed, 3 Jan 2024 17:14:22 +0800
Message-ID: <20240103091423.400294-13-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu

Hugepd is so far only used on PowerPC, on 4K page size kernels where the hash MMU is in use. follow_page_mask() used to rely on the hugetlb APIs to access hugepd entries; teach follow_page_mask() to handle hugepd itself. With the previous refactoring of the fast-gup helper gup_huge_pd(), most of that code can be reused directly. Some of it is unnecessary for follow_page(): for example, gup_hugepte() tries to detect a concurrent pgtable entry change, which can never happen with slow gup (which holds the pgtable lock), but checking for it is harmless.
Since follow_page() only ever fetches one page, setting the end to "address + PAGE_SIZE" should suffice. We will still do the pgtable walk once for each hugetlb page, by setting ctx->page_mask properly.

One thing worth mentioning is that on Power8 hash MMUs, some levels of the pgtable _bad() helpers will report is_hugepd() entries as true. I think this at least applies to PUD on Power8 with 4K page size: feeding a hugepd entry to pud_bad() yields a false positive. Let's leave that alone for now, since it may be arch-specific and I am reluctant to touch it. It is not a problem for this patch, as long as hugepd entries are detected before any bad pgtable entries.

Signed-off-by: Peter Xu
---
 mm/gup.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 69 insertions(+), 9 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index d96429b6fc55..245214b64108 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -30,6 +30,11 @@ struct follow_page_context {
 	unsigned int page_mask;
 };

+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx);
+
 static inline void sanity_check_pinned_pages(struct page **pages,
 					     unsigned long npages)
 {
@@ -871,6 +876,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		return no_page_table(vma, flags, address);
 	if (!pmd_present(pmdval))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pmd_val(pmdval)))))
+		return follow_hugepd(vma, __hugepd(pmd_val(pmdval)),
+				     address, PMD_SHIFT, flags, ctx);
 	if (pmd_devmap(pmdval)) {
 		ptl = pmd_lock(mm, pmd);
 		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
@@ -921,6 +929,9 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 	pud = READ_ONCE(*pudp);
 	if (pud_none(pud) || !pud_present(pud))
 		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pud_val(pud)))))
+		return follow_hugepd(vma,
				     __hugepd(pud_val(pud)),
+				     address, PUD_SHIFT, flags, ctx);
 	if (pud_huge(pud)) {
 		ptl = pud_lock(mm, pudp);
 		page = follow_huge_pud(vma, address, pudp, flags, ctx);
@@ -940,13 +951,17 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 				    unsigned int flags,
 				    struct follow_page_context *ctx)
 {
-	p4d_t *p4d;
+	p4d_t *p4d, p4dval;

 	p4d = p4d_offset(pgdp, address);
-	if (p4d_none(*p4d))
-		return no_page_table(vma, flags, address);
-	BUILD_BUG_ON(p4d_huge(*p4d));
-	if (unlikely(p4d_bad(*p4d)))
+	p4dval = *p4d;
+	BUILD_BUG_ON(p4d_huge(p4dval));
+
+	if (unlikely(is_hugepd(__hugepd(p4d_val(p4dval)))))
+		return follow_hugepd(vma, __hugepd(p4d_val(p4dval)),
+				     address, P4D_SHIFT, flags, ctx);
+
+	if (p4d_none(p4dval) || unlikely(p4d_bad(p4dval)))
 		return no_page_table(vma, flags, address);

 	return follow_pud_mask(vma, address, p4d, flags, ctx);
@@ -980,7 +995,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 			      unsigned long address, unsigned int flags,
 			      struct follow_page_context *ctx)
 {
-	pgd_t *pgd;
+	pgd_t *pgd, pgdval;
 	struct mm_struct *mm = vma->vm_mm;

 	ctx->page_mask = 0;
@@ -995,11 +1010,17 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 					    &ctx->page_mask);

 	pgd = pgd_offset(mm, address);
+	pgdval = *pgd;

-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		return no_page_table(vma, flags, address);
+	if (unlikely(is_hugepd(__hugepd(pgd_val(pgdval)))))
+		page = follow_hugepd(vma, __hugepd(pgd_val(pgdval)),
+				     address, PGDIR_SHIFT, flags, ctx);
+	else if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
+		page = no_page_table(vma, flags, address);
+	else
+		page = follow_p4d_mask(vma, address, pgd, flags, ctx);

-	return follow_p4d_mask(vma, address, pgd, flags, ctx);
+	return page;
 }

 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
@@ -3026,6 +3047,37 @@ static int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 	return 1;
 }

+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	struct page *page;
+	struct hstate *h;
+	spinlock_t *ptl;
+	int nr = 0, ret;
+	pte_t *ptep;
+
+	/* Only hugetlb supports hugepd */
+	if (WARN_ON_ONCE(!is_vm_hugetlb_page(vma)))
+		return ERR_PTR(-EFAULT);
+
+	h = hstate_vma(vma);
+	ptep = hugepte_offset(hugepd, addr, pdshift);
+	ptl = huge_pte_lock(h, vma->vm_mm, ptep);
+	ret = gup_huge_pd(hugepd, addr, pdshift, addr + PAGE_SIZE,
+			  flags, &page, &nr);
+	spin_unlock(ptl);
+
+	if (ret) {
+		WARN_ON_ONCE(nr != 1);
+		ctx->page_mask = (1U << huge_page_order(h)) - 1;
+		return page;
+	}
+
+	return NULL;
+}
 #else
 static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 			      unsigned int pdshift, unsigned long end, unsigned int flags,
@@ -3033,6 +3085,14 @@ static inline int gup_huge_pd(hugepd_t hugepd, unsigned long addr,
 {
 	return 0;
 }
+
+static struct page *follow_hugepd(struct vm_area_struct *vma, hugepd_t hugepd,
+				  unsigned long addr, unsigned int pdshift,
+				  unsigned int flags,
+				  struct follow_page_context *ctx)
+{
+	return NULL;
+}
 #endif /* CONFIG_ARCH_HAS_HUGEPD */

 static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,

From patchwork Wed Jan 3 09:14:23 2024
From: peterx@redhat.com
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: James Houghton, David Hildenbrand, "Kirill A. Shutemov", Yang Shi, peterx@redhat.com, linux-riscv@lists.infradead.org, Andrew Morton, "Aneesh Kumar K. V", Rik van Riel, Andrea Arcangeli, Axel Rasmussen, Mike Rapoport, John Hubbard, Vlastimil Babka, Michael Ellerman, Christophe Leroy, Andrew Jones, linuxppc-dev@lists.ozlabs.org, Mike Kravetz, Muchun Song, linux-arm-kernel@lists.infradead.org, Jason Gunthorpe, Christoph Hellwig, Lorenzo Stoakes, Matthew Wilcox
Subject: [PATCH v2 13/13] mm/gup: Handle hugetlb in the generic follow_page_mask code
Date: Wed, 3 Jan 2024 17:14:23 +0800
Message-ID: <20240103091423.400294-14-peterx@redhat.com>
In-Reply-To: <20240103091423.400294-1-peterx@redhat.com>
References: <20240103091423.400294-1-peterx@redhat.com>
From: Peter Xu

Now that follow_page() is ready to handle hugetlb pages in whatever form, on all architectures, switch hugetlb to the generic code path. It is time to retire hugetlb_follow_page_mask(), following the earlier retirement of follow_hugetlb_page() in commit 4849807114b8.

There may be a slight difference in how the loops run when processing slow GUP over a large hugetlb range on archs that support cont_pte/cont_pmd hugetlb pages: with this patch applied, each iteration of __get_user_pages() resolves one pgtable entry, whereas it used to rely on the size of the hugetlb hstate, which may cover multiple entries in one iteration.

A quick performance test on an aarch64 VM on an M1 chip shows a 15% degradation over a tight loop of slow gup after the path is switched. That should not be a problem, because slow gup is not a hot path for GUP in general: when the page is present, fast gup will already succeed; when the page is indeed missing and requires a follow-up page fault, the slow gup degradation will probably be buried in the fault paths anyway. It also explains why slow gup for THP used to be very slow before 57edfcfd3419 ("mm/gup: accelerate thp gup even for "pages != NULL"") landed; that commit was a side benefit rather than the result of a performance analysis.

If the performance does become a concern, we can consider handling CONT_PTE in follow_page(). Until that is justified as necessary, keep everything clean and simple.
Signed-off-by: Peter Xu
---
 include/linux/hugetlb.h |  7 ----
 mm/gup.c                | 15 +++------
 mm/hugetlb.c            | 71 -----------------------------------------
 3 files changed, 5 insertions(+), 88 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index e8eddd51fc17..cdbb53407722 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -332,13 +332,6 @@ static inline void hugetlb_zap_end(
 {
 }

-static inline struct page *hugetlb_follow_page_mask(
-	struct vm_area_struct *vma, unsigned long address, unsigned int flags,
-	unsigned int *page_mask)
-{
-	BUILD_BUG(); /* should never be compiled in if !CONFIG_HUGETLB_PAGE*/
-}
-
 static inline int copy_hugetlb_page_range(struct mm_struct *dst,
 					  struct mm_struct *src,
 					  struct vm_area_struct *dst_vma,
diff --git a/mm/gup.c b/mm/gup.c
index 245214b64108..4f8a3dc287c9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -997,18 +997,11 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 {
 	pgd_t *pgd, pgdval;
 	struct mm_struct *mm = vma->vm_mm;
+	struct page *page;

-	ctx->page_mask = 0;
-
-	/*
-	 * Call hugetlb_follow_page_mask for hugetlb vmas as it will use
-	 * special hugetlb page table walking code. This eliminates the
-	 * need to check for hugetlb entries in the general walking code.
-	 */
-	if (is_vm_hugetlb_page(vma))
-		return hugetlb_follow_page_mask(vma, address, flags,
-						&ctx->page_mask);
+	vma_pgtable_walk_begin(vma);

+	ctx->page_mask = 0;
 	pgd = pgd_offset(mm, address);
 	pgdval = *pgd;
@@ -1020,6 +1013,8 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 	else
 		page = follow_p4d_mask(vma, address, pgd, flags, ctx);

+	vma_pgtable_walk_end(vma);
+
 	return page;
 }

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bfb52bb8b943..e13b4e038c2c 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6782,77 +6782,6 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
 }
 #endif /* CONFIG_USERFAULTFD */

-struct page *hugetlb_follow_page_mask(struct vm_area_struct *vma,
-				      unsigned long address, unsigned int flags,
-				      unsigned int *page_mask)
-{
-	struct hstate *h = hstate_vma(vma);
-	struct mm_struct *mm = vma->vm_mm;
-	unsigned long haddr = address & huge_page_mask(h);
-	struct page *page = NULL;
-	spinlock_t *ptl;
-	pte_t *pte, entry;
-	int ret;
-
-	hugetlb_vma_lock_read(vma);
-	pte = hugetlb_walk(vma, haddr, huge_page_size(h));
-	if (!pte)
-		goto out_unlock;
-
-	ptl = huge_pte_lock(h, mm, pte);
-	entry = huge_ptep_get(pte);
-	if (pte_present(entry)) {
-		page = pte_page(entry);
-
-		if (!huge_pte_write(entry)) {
-			if (flags & FOLL_WRITE) {
-				page = NULL;
-				goto out;
-			}
-
-			if (gup_must_unshare(vma, flags, page)) {
-				/* Tell the caller to do unsharing */
-				page = ERR_PTR(-EMLINK);
-				goto out;
-			}
-		}
-
-		page = nth_page(page, ((address & ~huge_page_mask(h)) >> PAGE_SHIFT));
-
-		/*
-		 * Note that page may be a sub-page, and with vmemmap
-		 * optimizations the page struct may be read only.
-		 * try_grab_page() will increase the ref count on the
-		 * head page, so this will be OK.
-		 *
-		 * try_grab_page() should always be able to get the page here,
-		 * because we hold the ptl lock and have verified pte_present().
-		 */
-		ret = try_grab_page(page, flags);
-
-		if (WARN_ON_ONCE(ret)) {
-			page = ERR_PTR(ret);
-			goto out;
-		}
-
-		*page_mask = (1U << huge_page_order(h)) - 1;
-	}
-out:
-	spin_unlock(ptl);
-out_unlock:
-	hugetlb_vma_unlock_read(vma);
-
-	/*
-	 * Fixup retval for dump requests: if pagecache doesn't exist,
-	 * don't try to allocate a new page but just skip it.
-	 */
-	if (!page && (flags & FOLL_DUMP) &&
-	    !hugetlbfs_pagecache_present(h, vma, address))
-		page = ERR_PTR(-EFAULT);
-
-	return page;
-}
-
 long hugetlb_change_protection(struct vm_area_struct *vma,
 		unsigned long address, unsigned long end,
 		pgprot_t newprot, unsigned long cp_flags)