From patchwork Wed Jul 24 07:03:58 2024
X-Patchwork-Submitter: Baolin Wang <baolin.wang@linux.alibaba.com>
X-Patchwork-Id: 13740624
From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, david@redhat.com, 21cnbao@gmail.com,
 ryan.roberts@arm.com, ziy@nvidia.com, ioworker0@gmail.com,
 da.gomez@samsung.com, p.raghav@samsung.com, baolin.wang@linux.alibaba.com,
 linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 1/3] mm: shmem: add file length arg in shmem_get_folio()
 path
Date: Wed, 24 Jul 2024 15:03:58 +0800
Message-Id: <70972d294797b377bf24a7290659e9057b978287.1721720891.git.baolin.wang@linux.alibaba.com>
X-Mailer: git-send-email 2.39.3
MIME-Version: 1.0
From: Daniel Gomez <da.gomez@samsung.com>

In preparation for supporting large folios in the write and fallocate
paths, add a file length argument to the shmem_get_folio() path so the
folio order can be calculated based on the file size. The read, page
cache read, and vm fault paths keep passing order-0 (PAGE_SIZE). This
enables high-order folios in the write and fallocate paths once the
folio order is calculated from the length.

Signed-off-by: Daniel Gomez <da.gomez@samsung.com>
Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 fs/xfs/scrub/xfile.c     |  6 +++---
 fs/xfs/xfs_buf_mem.c     |  3 ++-
 include/linux/shmem_fs.h |  2 +-
 mm/khugepaged.c          |  3 ++-
 mm/shmem.c               | 28 ++++++++++++++++------------
 mm/userfaultfd.c         |  2 +-
 6 files changed, 25 insertions(+), 19 deletions(-)

diff --git a/fs/xfs/scrub/xfile.c b/fs/xfs/scrub/xfile.c
index d848222f802b..d814d9d786d3 100644
--- a/fs/xfs/scrub/xfile.c
+++ b/fs/xfs/scrub/xfile.c
@@ -127,7 +127,7 @@ xfile_load(
 		unsigned int	offset;

 		if (shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio,
-				SGP_READ) < 0)
+				SGP_READ, PAGE_SIZE) < 0)
 			break;
 		if (!folio) {
 			/*
@@ -197,7 +197,7 @@ xfile_store(
 		unsigned int	offset;

 		if (shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio,
-				SGP_CACHE) < 0)
+				SGP_CACHE, PAGE_SIZE) < 0)
 			break;
 		if (filemap_check_wb_err(inode->i_mapping, 0)) {
 			folio_unlock(folio);
@@ -268,7 +268,7 @@ xfile_get_folio(
 	pflags = memalloc_nofs_save();
 	error = shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio,
-			(flags & XFILE_ALLOC) ? SGP_CACHE : SGP_READ);
+			(flags & XFILE_ALLOC) ? SGP_CACHE : SGP_READ, PAGE_SIZE);
 	memalloc_nofs_restore(pflags);
 	if (error)
 		return ERR_PTR(error);
diff --git a/fs/xfs/xfs_buf_mem.c b/fs/xfs/xfs_buf_mem.c
index 9bb2d24de709..784c81d35a1f 100644
--- a/fs/xfs/xfs_buf_mem.c
+++ b/fs/xfs/xfs_buf_mem.c
@@ -149,7 +149,8 @@ xmbuf_map_page(
 		return -ENOMEM;
 	}

-	error = shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio, SGP_CACHE);
+	error = shmem_get_folio(inode, pos >> PAGE_SHIFT, &folio, SGP_CACHE,
+				PAGE_SIZE);
 	if (error)
 		return error;
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 1564d7d3ca61..34beaca2f853 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -144,7 +144,7 @@ enum sgp_type {
 };

 int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
-		enum sgp_type sgp);
+		enum sgp_type sgp, size_t len);
 struct folio *shmem_read_folio_gfp(struct address_space *mapping,
 		pgoff_t index, gfp_t gfp);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index a5ec03ef8722..3c9dbebbdf38 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1867,7 +1867,8 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 				xas_unlock_irq(&xas);
 				/* swap in or instantiate fallocated page */
 				if (shmem_get_folio(mapping->host, index,
-						&folio, SGP_NOALLOC)) {
+						&folio, SGP_NOALLOC,
+						PAGE_SIZE)) {
 					result = SCAN_FAIL;
 					goto xa_unlocked;
 				}
diff --git a/mm/shmem.c b/mm/shmem.c
index db8f74cac1a2..92ed09527682 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -980,7 +980,7 @@ static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
 	 * (although in some cases this is just a waste of time).
 	 */
 	folio = NULL;
-	shmem_get_folio(inode, index, &folio, SGP_READ);
+	shmem_get_folio(inode, index, &folio, SGP_READ, PAGE_SIZE);
 	return folio;
 }
@@ -2094,7 +2094,7 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
  */
 static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		struct folio **foliop, enum sgp_type sgp, gfp_t gfp,
-		struct vm_fault *vmf, vm_fault_t *fault_type)
+		struct vm_fault *vmf, vm_fault_t *fault_type, size_t len)
 {
 	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
 	struct mm_struct *fault_mm;
@@ -2297,10 +2297,10 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
  * Return: 0 if successful, else a negative error code.
  */
 int shmem_get_folio(struct inode *inode, pgoff_t index, struct folio **foliop,
-		enum sgp_type sgp)
+		enum sgp_type sgp, size_t len)
 {
 	return shmem_get_folio_gfp(inode, index, foliop, sgp,
-			mapping_gfp_mask(inode->i_mapping), NULL, NULL);
+			mapping_gfp_mask(inode->i_mapping), NULL, NULL, len);
 }
 EXPORT_SYMBOL_GPL(shmem_get_folio);
@@ -2395,7 +2395,7 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
 	WARN_ON_ONCE(vmf->page != NULL);
 	err = shmem_get_folio_gfp(inode, vmf->pgoff, &folio, SGP_CACHE,
-				  gfp, vmf, &ret);
+				  gfp, vmf, &ret, PAGE_SIZE);
 	if (err)
 		return vmf_error(err);
 	if (folio) {
@@ -2895,6 +2895,9 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 	struct folio *folio;
 	int ret = 0;

+	if (!mapping_large_folio_support(mapping))
+		len = min_t(size_t, len, PAGE_SIZE - offset_in_page(pos));
+
 	/* i_rwsem is held by caller */
 	if (unlikely(info->seals & (F_SEAL_GROW |
 				   F_SEAL_WRITE | F_SEAL_FUTURE_WRITE))) {
@@ -2904,7 +2907,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
 		return -EPERM;
 	}

-	ret = shmem_get_folio(inode, index, &folio, SGP_WRITE);
+	ret = shmem_get_folio(inode, index, &folio, SGP_WRITE, len);
 	if (ret)
 		return ret;
@@ -2975,7 +2978,7 @@ static ssize_t shmem_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 			break;
 		}

-		error = shmem_get_folio(inode, index, &folio, SGP_READ);
+		error = shmem_get_folio(inode, index, &folio, SGP_READ, PAGE_SIZE);
 		if (error) {
 			if (error == -EINVAL)
 				error = 0;
@@ -3152,7 +3155,7 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
 			break;

 		error = shmem_get_folio(inode, *ppos / PAGE_SIZE, &folio,
-					SGP_READ);
+					SGP_READ, PAGE_SIZE);
 		if (error) {
 			if (error == -EINVAL)
 				error = 0;
@@ -3339,7 +3342,8 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			error = -ENOMEM;
 		else
 			error = shmem_get_folio(inode, index, &folio,
-						SGP_FALLOC);
+						SGP_FALLOC,
+						(end - index) << PAGE_SHIFT);
 		if (error) {
 			info->fallocend = undo_fallocend;
 			/* Remove the !uptodate folios we added */
@@ -3690,7 +3694,7 @@ static int shmem_symlink(struct mnt_idmap *idmap, struct inode *dir,
 	} else {
 		inode_nohighmem(inode);
 		inode->i_mapping->a_ops = &shmem_aops;
-		error = shmem_get_folio(inode, 0, &folio, SGP_WRITE);
+		error = shmem_get_folio(inode, 0, &folio, SGP_WRITE, PAGE_SIZE);
 		if (error)
 			goto out_remove_offset;
 		inode->i_op = &shmem_symlink_inode_operations;
@@ -3736,7 +3740,7 @@ static const char *shmem_get_link(struct dentry *dentry, struct inode *inode,
 			return ERR_PTR(-ECHILD);
 		}
 	} else {
-		error = shmem_get_folio(inode, 0, &folio, SGP_READ);
+		error = shmem_get_folio(inode, 0, &folio, SGP_READ, PAGE_SIZE);
 		if (error)
 			return ERR_PTR(error);
 		if (!folio)
@@ -5209,7 +5213,7 @@ struct folio *shmem_read_folio_gfp(struct address_space *mapping,
 	int error;

 	error = shmem_get_folio_gfp(inode, index, &folio, SGP_CACHE,
-				    gfp, NULL, NULL);
+				    gfp, NULL, NULL, PAGE_SIZE);
 	if (error)
 		return ERR_PTR(error);
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e54e5c8907fa..c275e34c435a 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -391,7 +391,7 @@ static int mfill_atomic_pte_continue(pmd_t *dst_pmd,
 	struct page *page;
 	int ret;

-	ret = shmem_get_folio(inode, pgoff, &folio, SGP_NOALLOC);
+	ret = shmem_get_folio(inode, pgoff, &folio, SGP_NOALLOC, PAGE_SIZE);
 	/* Our caller expects us to return -EFAULT if we failed to find folio */
 	if (ret == -ENOENT)
 		ret = -EFAULT;