From patchwork Wed Jul 17 22:02:17 2024
From: Peter Xu <peterx@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Vlastimil Babka, peterx@redhat.com, David Hildenbrand, Oscar Salvador,
    linux-s390@vger.kernel.org, Andrew Morton, Matthew Wilcox, Dan Williams,
    Michal Hocko, linux-riscv@lists.infradead.org, sparclinux@vger.kernel.org,
    Alex Williamson, Jason Gunthorpe, x86@kernel.org, Alistair Popple,
    linuxppc-dev@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org,
    Ryan Roberts, Hugh Dickins, Axel Rasmussen
Subject: [PATCH RFC 4/6] mm: Move huge mapping declarations from internal.h to huge_mm.h
Date: Wed, 17 Jul 2024 18:02:17 -0400
Message-ID: <20240717220219.3743374-5-peterx@redhat.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20240717220219.3743374-1-peterx@redhat.com>
References: <20240717220219.3743374-1-peterx@redhat.com>
MIME-Version: 1.0

Most huge mapping helpers are declared in huge_mm.h rather than
internal.h.  Move the few remaining ones from internal.h into huge_mm.h.

To move pmd_needs_soft_dirty_wp() over, we also need to move
vma_soft_dirty_enabled() into mm.h, as it will soon be needed by two
headers (internal.h and huge_mm.h).
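[Editor's aside, not part of the patch: the inverted meaning of
VM_SOFTDIRTY is what makes the IS_ENABLED() ordering inside
vma_soft_dirty_enabled() matter.  The standalone mock below sketches
that semantic; struct mock_vma and main() are illustration-only, while
the 0x08000000 flag value mirrors the kernel's CONFIG_MEM_SOFT_DIRTY
definition of VM_SOFTDIRTY.]

/*
 * Illustration only: VM_SOFTDIRTY being *set* means the vma is already
 * considered fully dirty, so tracking is "enabled" exactly when the
 * flag is clear.  Without CONFIG_MEM_SOFT_DIRTY the kernel defines the
 * flag as 0, which is why the config check must come first.
 */
#include <stdbool.h>
#include <stdio.h>

#define VM_SOFTDIRTY 0x08000000UL  /* mirrors the kernel's define */

struct mock_vma { unsigned long vm_flags; };

static bool mock_vma_soft_dirty_enabled(const struct mock_vma *vma)
{
	/* Flag set => everything already reported dirty => tracking off. */
	return !(vma->vm_flags & VM_SOFTDIRTY);
}

int main(void)
{
	struct mock_vma cleared = { .vm_flags = 0 };            /* e.g. after clear_refs */
	struct mock_vma marked  = { .vm_flags = VM_SOFTDIRTY }; /* e.g. a fresh mapping */

	printf("tracking on cleared vma: %d\n", mock_vma_soft_dirty_enabled(&cleared)); /* 1 */
	printf("tracking on marked vma:  %d\n", mock_vma_soft_dirty_enabled(&marked));  /* 0 */
	return 0;
}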
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/linux/huge_mm.h | 10 ++++++++++
 include/linux/mm.h      | 18 ++++++++++++++++++
 mm/internal.h           | 33 ---------------------------------
 3 files changed, 28 insertions(+), 33 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 37482c8445d1..d8b642ad512d 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -8,6 +8,11 @@
 #include <linux/fs.h> /* only for vma_is_dax() */
 #include <linux/kobject.h>
 
+void touch_pud(struct vm_area_struct *vma, unsigned long addr,
+	       pud_t *pud, bool write);
+void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
+	       pmd_t *pmd, bool write);
+pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
@@ -629,4 +634,9 @@ static inline int split_folio_to_order(struct folio *folio, int new_order)
 #define split_folio_to_list(f, l) split_folio_to_list_to_order(f, l, 0)
 #define split_folio(f) split_folio_to_order(f, 0)
 
+static inline bool pmd_needs_soft_dirty_wp(struct vm_area_struct *vma, pmd_t pmd)
+{
+	return vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd);
+}
+
 #endif /* _LINUX_HUGE_MM_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5f1075d19600..fa10802d8faa 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1117,6 +1117,24 @@ static inline unsigned int folio_order(struct folio *folio)
 	return folio->_flags_1 & 0xff;
 }
 
+static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)
+{
+	/*
+	 * NOTE: we must check this before VM_SOFTDIRTY on soft-dirty
+	 * enablements, because when without soft-dirty being compiled in,
+	 * VM_SOFTDIRTY is defined as 0x0, then !(vm_flags & VM_SOFTDIRTY)
+	 * will be constantly true.
+	 */
+	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
+		return false;
+
+	/*
+	 * Soft-dirty is kind of special: its tracking is enabled when the
+	 * vma flags not set.
+	 */
+	return !(vma->vm_flags & VM_SOFTDIRTY);
+}
+
 #include <linux/huge_mm.h>
 
 /*
diff --git a/mm/internal.h b/mm/internal.h
index b4d86436565b..e49941747749 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -917,8 +917,6 @@ bool need_mlock_drain(int cpu);
 void mlock_drain_local(void);
 void mlock_drain_remote(int cpu);
 
-extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
-
 /**
  * vma_address - Find the virtual address a page range is mapped at
  * @vma: The vma which maps this object.
@@ -1229,14 +1227,6 @@ int migrate_device_coherent_page(struct page *page);
 int __must_check try_grab_folio(struct folio *folio, int refs,
 				unsigned int flags);
 
-/*
- * mm/huge_memory.c
- */
-void touch_pud(struct vm_area_struct *vma, unsigned long addr,
-	       pud_t *pud, bool write);
-void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
-	       pmd_t *pmd, bool write);
-
 /*
  * mm/mmap.c
  */
@@ -1342,29 +1332,6 @@ static __always_inline void vma_set_range(struct vm_area_struct *vma,
 	vma->vm_pgoff = pgoff;
 }
 
-static inline bool vma_soft_dirty_enabled(struct vm_area_struct *vma)
-{
-	/*
-	 * NOTE: we must check this before VM_SOFTDIRTY on soft-dirty
-	 * enablements, because when without soft-dirty being compiled in,
-	 * VM_SOFTDIRTY is defined as 0x0, then !(vm_flags & VM_SOFTDIRTY)
-	 * will be constantly true.
-	 */
-	if (!IS_ENABLED(CONFIG_MEM_SOFT_DIRTY))
-		return false;
-
-	/*
-	 * Soft-dirty is kind of special: its tracking is enabled when the
-	 * vma flags not set.
-	 */
-	return !(vma->vm_flags & VM_SOFTDIRTY);
-}
-
-static inline bool pmd_needs_soft_dirty_wp(struct vm_area_struct *vma, pmd_t pmd)
-{
-	return vma_soft_dirty_enabled(vma) && !pmd_soft_dirty(pmd);
-}
-
 static inline bool pte_needs_soft_dirty_wp(struct vm_area_struct *vma, pte_t pte)
 {
 	return vma_soft_dirty_enabled(vma) && !pte_soft_dirty(pte);
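
[Editor's aside, not part of the patch: with the declaration now in
huge_mm.h, callers outside mm/internal.h can consult the helper.  A
minimal sketch of such a caller follows; the function name
sketch_can_make_pmd_writable() is hypothetical, though a check of this
shape is what can_change_pmd_writable() in mm/mprotect.c performs on
this series' base.]

#include <linux/huge_mm.h>

/*
 * Hypothetical caller sketch: refuse to make the pmd writable while
 * soft-dirty tracking still needs the next write to trap, so that the
 * write fault can set the soft-dirty bit before granting write access.
 */
static bool sketch_can_make_pmd_writable(struct vm_area_struct *vma, pmd_t pmd)
{
	if (pmd_needs_soft_dirty_wp(vma, pmd))
		return false;	/* keep write protection for soft-dirty */
	return true;
}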