Date: Thu, 29 Aug 2024 23:54:15 -0400
From: Rik van Riel
To: Hugh Dickins
Cc: kernel-team@meta.com, Andrew Morton, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, Dave Chinner,
Wong" , Vlastimil Babka Subject: [PATCH] mm,tmpfs: consider end of file write in shmem_is_huge Message-ID: <20240829235415.57374fc3@imladris.surriel.com> X-Mailer: Claws Mail 4.3.0 (GTK 3.24.41; x86_64-redhat-linux-gnu) MIME-Version: 1.0 X-Rspamd-Queue-Id: 0D896C0006 X-Stat-Signature: b9p8uzwppiihoup3e1wrnh64zicup15g X-Rspamd-Server: rspam09 X-Rspam-User: X-HE-Tag: 1724990068-590355 X-HE-Meta: U2FsdGVkX19Ilv3P+P1yG6JfSxwpIuKoeUwWFsZ6vOJ/yqDzDzK6tNEH3mPSbaAF52Q6ZFhPyQN9jyo5uONGGvIX104E6NEd3FOmGHwVwo5nU2plB8sNWOoO0e5L0JpU4pzVQHEBy1BKlRzVczDAGVrG+J38Rp8kSb/tyuNHss3g/q1XpIBdO7d0Sw1blc8dLUouhpzUaxzaXumkGFCkbDYK+bFgJkE8LPwVzQ7ZuJmimqFesAztYFGgbxJ2HfPZO0Keipbuq5VR0Azl+Azkcmu/mmYWxy+8DeEGCf2gEjiZgm1hXhhWqMYmqp8Jb3Iht7+JUwC0qoOAABiMHI9kuNnfqIwbBK2QY/5rgbcEQKwmrBfQnNDIgU6RQsQS+JVgNk7bk/SpEIvfeiiHe3rXrRQ5QdGgEaZ7fGwHRVSwzLUGIUmmmtRL+o1/yds3iKHiasGRT1zK16fdBZtumMWaYqzdZzzxtoDmj/jjhvuxQqvFNqL10+ZqQBVjfEtWdfWdWfE/Zn6S1JUb/v+NFO9AxIFxf9sghSszj6muPubKyS2lPSw79Ng7+qumJ0QlZe/GFOCnHe7nQKVKunMAIZHw1+x6W0NncHTpQspSbFPGZiJJH01pUBLkEx6GEVsArH8A73fI6Zb7IYOSjFAOEMdy6Kjh7oVkPpPdtbM8GvyJe0Q+GyOnHN/kno61qH7fCRxWAEw0OkA2nF+Ol1mk372u3azIv14YwneY22J5z5CbEuBoOpFpuoRbzhx0zeTmX38KHqu/axtbXwY3+D3sEklveX8Iu/krQtxDVEYHvJPaEm8hR8UC+MhkxVFFAdd6Lngt9Xz//1mCUeGUV+dQaYuLfExs0gj+8s0LrvgTzh6aP6XLdtDVyFSXW7N37RuP0MktzNKzNRg907LLYgiIp93aVz71r6kyK1qRASw5gwDg4wbvcBLCZAcereqYNMLhKqkSVsO9I8y1/FbR0UqYFtB uykjxmV4 XmRwa2TEj3cQRIEGy4tP0hrzVrdvTSyEb+bipUxLOIIk+RGvPc0vU7SJywrgHOyRJ/Na31o7dhUTkOLsPpOuX3aXvWjdvmbR2h2wL5ac9UgGXL69gVu6mGse39Q7yantQvF0CnEYzMxTH099wiKzO05x0IBTB5ylDjhpNlq8mGaaI55QtNeU9wUb9U3lJqH2eRIbI4r52uXH7m13Y/V0BeAc2TCCNLqw6xi2XKV1VnzIzIpIQBAy8N9RQovoEujlzvAxVv3ZK8p2SNBvcxtqQyqXEX6AseDPijJrBoIUxWEHFhciTHOSb7sjChvWYcPuabNAP3k0rFxODPehxJLVCpsnOIzONjGhmKmpMdnN3aAj6Rh/ci7XJz1qhcZJOaVPWkWFQgwTDYIM3W+A= X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Take the end of a file write into consideration when deciding whether or not to use huge folios for tmpfs files when the tmpfs filesystem is mounted with huge=within_size This allows large writes that append to the end of a file to automatically use large folios. 

Doing 4MB sequential writes without fallocate to a 16GB tmpfs file:

- 4kB pages:        1560 MB/s
- huge=within_size: 4720 MB/s
- huge=always:      4720 MB/s

Signed-off-by: Rik van Riel
---
 fs/xfs/scrub/xfile.c     |  6 +++---
 fs/xfs/xfs_buf_mem.c     |  2 +-
 include/linux/shmem_fs.h | 12 ++++++-----
 mm/huge_memory.c         |  2 +-
 mm/khugepaged.c          |  2 +-
 mm/shmem.c               | 44 +++++++++++++++++++++-------------------
 mm/userfaultfd.c         |  2 +-
 7 files changed, 37 insertions(+), 33 deletions(-)

diff --git a/fs/xfs/scrub/xfile.c b/fs/xfs/scrub/xfile.c
index d848222f802b..e6e1c1fd23cb 100644
--- a/fs/xfs/scrub/xfile.c
+++ b/fs/xfs/scrub/xfile.c
@@ -126,7 +126,7 @@ xfile_load(
 		unsigned int len;
 		unsigned int offset;
 
-		if (shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio,
+		if (shmem_get_folio(inode, pos >> PAGE_SHIFT, 0, &folio,
 				SGP_READ) < 0)
 			break;
 		if (!folio) {
@@ -196,7 +196,7 @@ xfile_store(
 		unsigned int len;
 		unsigned int offset;
 
-		if (shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio,
+		if (shmem_get_folio(inode, pos >> PAGE_SHIFT, 0, &folio,
 				SGP_CACHE) < 0)
 			break;
 		if (filemap_check_wb_err(inode->i_mapping, 0)) {
@@ -267,7 +267,7 @@ xfile_get_folio(
 		i_size_write(inode, pos + len);
 
 	pflags = memalloc_nofs_save();
-	error = shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio,
+	error = shmem_get_folio(inode, pos >> PAGE_SHIFT, 0, &folio,
 			(flags & XFILE_ALLOC) ? SGP_CACHE : SGP_READ);
 	memalloc_nofs_restore(pflags);
 	if (error)
diff --git a/fs/xfs/xfs_buf_mem.c b/fs/xfs/xfs_buf_mem.c
index 9bb2d24de709..07bebbfb16ee 100644
--- a/fs/xfs/xfs_buf_mem.c
+++ b/fs/xfs/xfs_buf_mem.c
@@ -149,7 +149,7 @@ xmbuf_map_page(
 			return -ENOMEM;
 	}
 
-	error = shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio, SGP_CACHE);
+	error = shmem_get_folio(inode, pos >> PAGE_SHIFT, 0, &folio, SGP_CACHE);
 	if (error)
 		return error;
 
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 1d06b1e5408a..846c1ea91f50 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -111,13 +111,15 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
 int shmem_unuse(unsigned int type);
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
-			  struct mm_struct *mm, unsigned long vm_flags);
+extern bool shmem_is_huge(struct inode *inode, pgoff_t index, loff_t write_end,
+			  bool shmem_huge_force, struct mm_struct *mm,
+			  unsigned long vm_flags);
 unsigned long shmem_allowable_huge_orders(struct inode *inode,
 				struct vm_area_struct *vma, pgoff_t index,
 				bool global_huge);
 #else
-static __always_inline bool shmem_is_huge(struct inode *inode, pgoff_t index, bool shmem_huge_force,
+static __always_inline bool shmem_is_huge(struct inode *inode, pgoff_t index,
+					  loff_t write_end, bool shmem_huge_force,
 					  struct mm_struct *mm, unsigned long vm_flags)
 {
 	return false;
@@ -150,8 +152,8 @@ enum sgp_type {
 	SGP_FALLOC,	/* like SGP_WRITE, but make existing page Uptodate */
 };
 
-int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
-		enum sgp_type sgp);
+int shmem_get_folio(struct inode *inode, pgoff_t index, loff_t write_end,
+		struct folio **foliop, enum sgp_type sgp);
 struct folio *shmem_read_folio_gfp(struct address_space *mapping,
 		pgoff_t index, gfp_t gfp);
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 67c86a5d64a6..8c09071e78cd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -160,7 +160,7 @@ unsigned long __thp_vma_allowable_orders(struct vm_area_struct *vma,
 	 * own flags.
 	 */
 	if (!in_pf && shmem_file(vma->vm_file)) {
-		bool global_huge = shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff,
+		bool global_huge = shmem_is_huge(file_inode(vma->vm_file), vma->vm_pgoff, 0,
 						 !enforce_sysfs, vma->vm_mm, vm_flags);
 
 		if (!vma_is_anon_shmem(vma))
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index cdd1d8655a76..0ebabff10f97 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1866,7 +1866,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 		if (xa_is_value(folio) || !folio_test_uptodate(folio)) {
 			xas_unlock_irq(&xas);
 			/* swap in or instantiate fallocated page */
-			if (shmem_get_folio(mapping->host, index,
+			if (shmem_get_folio(mapping->host, index, 0,
 					&folio, SGP_NOALLOC)) {
 				result = SCAN_FAIL;
 				goto xa_unlocked;
diff --git a/mm/shmem.c b/mm/shmem.c
index 5a77acf6ac6a..964c24fc480f 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -548,7 +548,7 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 
 static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
 
-static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
+static bool __shmem_is_huge(struct inode *inode, pgoff_t index, loff_t write_end,
 			    bool shmem_huge_force, struct mm_struct *mm,
 			    unsigned long vm_flags)
 {
@@ -568,7 +568,8 @@ static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
 		return true;
 	case SHMEM_HUGE_WITHIN_SIZE:
 		index = round_up(index + 1, HPAGE_PMD_NR);
-		i_size = round_up(i_size_read(inode), PAGE_SIZE);
+		i_size = max(write_end, i_size_read(inode));
+		i_size = round_up(i_size, PAGE_SIZE);
 		if (i_size >> PAGE_SHIFT >= index)
 			return true;
 		fallthrough;
@@ -581,14 +582,14 @@ static bool __shmem_is_huge(struct inode *inode, pgoff_t index,
 	}
 }
 
-bool shmem_is_huge(struct inode *inode, pgoff_t index,
+bool shmem_is_huge(struct inode *inode, pgoff_t index, loff_t write_end,
 		   bool shmem_huge_force, struct mm_struct *mm,
 		   unsigned long vm_flags)
 {
 	if (HPAGE_PMD_ORDER > MAX_PAGECACHE_ORDER)
 		return false;
 
-	return __shmem_is_huge(inode, index, shmem_huge_force, mm, vm_flags);
+	return __shmem_is_huge(inode, index, write_end, shmem_huge_force, mm, vm_flags);
 }
 
 #if defined(CONFIG_SYSFS)
@@ -971,7 +972,7 @@ static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
 	 * (although in some cases this is just a waste of time).
 	 */
 	folio = NULL;
-	shmem_get_folio(inode, index, &folio, SGP_READ);
+	shmem_get_folio(inode, index, 0, &folio, SGP_READ);
 	return folio;
 }
 
@@ -1156,7 +1157,7 @@ static int shmem_getattr(struct mnt_idmap *idmap,
 					STATX_ATTR_NODUMP);
 	generic_fillattr(idmap, request_mask, inode, stat);
 
-	if (shmem_is_huge(inode, 0, false, NULL, 0))
+	if (shmem_is_huge(inode, 0, 0, false, NULL, 0))
 		stat->blksize = HPAGE_PMD_SIZE;
 
 	if (request_mask & STATX_BTIME) {
@@ -2078,8 +2079,8 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
  * vmf and fault_type are only supplied by shmem_fault: otherwise they are NULL.
  */
 static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
-		struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
-		struct vm_fault *vmf, vm_fault_t *fault_type)
+		loff_t write_end, struct folio **foliop, enum sgp_type sgp,
+		gfp_t gfp, struct vm_fault *vmf, vm_fault_t *fault_type)
 {
 	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
 	struct mm_struct *fault_mm;
@@ -2158,7 +2159,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		return 0;
 	}
 
-	huge = shmem_is_huge(inode, index, false, fault_mm,
+	huge = shmem_is_huge(inode, index, write_end, false, fault_mm,
 			     vma ? vma->vm_flags : 0);
 	/* Find hugepage orders that are allowed for anonymous shmem. */
 	if (vma && vma_is_anon_shmem(vma))
@@ -2268,6 +2269,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
  * shmem_get_folio - find, and lock a shmem folio.
  * @inode: inode to search
  * @index: the page index.
+ * @write_end: end of a write, could extend inode size.
  * @foliop: pointer to the folio if found
  * @sgp: SGP_* flags to control behavior
  *
@@ -2287,10 +2289,10 @@
  * Context: May sleep.
  * Return: 0 if successful, else a negative error code.
  */
-int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
-		enum sgp_type sgp)
+int shmem_get_folio(struct inode *inode, pgoff_t index, loff_t write_end,
+		struct folio **foliop, enum sgp_type sgp)
 {
-	return shmem_get_folio_gfp(inode, index, foliop, sgp,
+	return shmem_get_folio_gfp(inode, index, write_end, foliop, sgp,
 			mapping_gfp_mask(inode->i_mapping), NULL, NULL);
 }
 EXPORT_SYMBOL_GPL(shmem_get_folio);
@@ -2385,7 +2387,7 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	}
 
 	WARN_ON_ONCE(vmf->page != NULL);
-	err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
+	err = shmem_get_folio_gfp(inode, vmf->pgoff, 0, &folio, SGP_CACHE,
 				  gfp, vmf, &ret);
 	if (err)
 		return vmf_error(err);
@@ -2895,7 +2897,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 			return -EPERM;
 	}
 
-	ret = shmem_get_folio(inode, index, &folio, SGP_WRITE);
+	ret = shmem_get_folio(inode, index, pos + len, &folio, SGP_WRITE);
 	if (ret)
 		return ret;
 
@@ -2966,7 +2968,7 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 				break;
 		}
 
-		error = shmem_get_folio(inode, index, &folio, SGP_READ);
+		error = shmem_get_folio(inode, index, 0, &folio, SGP_READ);
 		if (error) {
 			if (error == -EINVAL)
 				error = 0;
@@ -3142,7 +3144,7 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
 		if (*ppos >= i_size_read(inode))
 			break;
 
-		error = shmem_get_folio(inode, *ppos / PAGE_SIZE, &folio,
+		error = shmem_get_folio(inode, *ppos / PAGE_SIZE, 0, &folio,
 					SGP_READ);
 		if (error) {
 			if (error == -EINVAL)
@@ -3332,8 +3334,8 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 		else if (shmem_falloc.nr_unswapped > shmem_falloc.nr_falloced)
 			error = -ENOMEM;
 		else
-			error = shmem_get_folio(inode, index, &folio,
-						SGP_FALLOC);
+			error = shmem_get_folio(inode, index, offset + len,
						&folio, SGP_FALLOC);
 		if (error) {
 			info->fallocend = undo_fallocend;
 			/* Remove the !uptodate folios we added */
@@ -3684,7 +3686,7 @@ static int shmem_symlink(struct mnt_idmap *idmap, struct inode *dir,
 	} else {
 		inode_nohighmem(inode);
 		inode->i_mapping->a_ops = &shmem_aops;
-		error = shmem_get_folio(inode, 0, &folio, SGP_WRITE);
+		error = shmem_get_folio(inode, 0, 0, &folio, SGP_WRITE);
 		if (error)
 			goto out_remove_offset;
 		inode->i_op = &shmem_symlink_inode_operations;
@@ -3730,7 +3732,7 @@ static const char *shmem_get_link(struct dentry *dentry, struct inode *inode,
 			return ERR_PTR(-ECHILD);
 		}
 	} else {
-		error = shmem_get_folio(inode, 0, &folio, SGP_READ);
+		error = shmem_get_folio(inode, 0, 0, &folio, SGP_READ);
 		if (error)
 			return ERR_PTR(error);
 		if (!folio)
@@ -5198,7 +5200,7 @@ struct folio *shmem_read_folio_gfp(struct address_space *mapping,
 	struct folio *folio;
 	int error;
 
-	error = shmem_get_folio_gfp(inode, index, &folio, SGP_CACHE,
+	error = shmem_get_folio_gfp(inode, index, 0, &folio, SGP_CACHE,
 				    gfp, NULL, NULL);
 	if (error)
 		return ERR_PTR(error);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e54e5c8907fa..cb8c76f8f118 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -391,7 +391,7 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
 	struct page *page;
 	int ret;
 
-	ret = shmem_get_folio(inode, pgoff, &folio, SGP_NOALLOC);
+	ret = shmem_get_folio(inode, pgoff, 0, &folio, SGP_NOALLOC);
 	/* Our caller expects us to return -EFAULT if we failed to find folio */
 	if (ret == -ENOENT)
 		ret = -EFAULT;
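
For reference, the calling convention that results from the
conversions above: callers that cannot extend the file pass 0 for the
new write_end argument, while write and fallocate paths pass the end
position of the operation so huge=within_size can anticipate the
i_size extension. Illustrative summary, not part of the patch:

	/* read-side lookup: file size cannot change */
	error = shmem_get_folio(inode, index, 0, &folio, SGP_READ);
	/* write_begin: the write may extend i_size up to pos + len */
	error = shmem_get_folio(inode, index, pos + len, &folio, SGP_WRITE);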