From patchwork Sat Sep 30 03:32:40 2023
X-Patchwork-Submitter: Hugh Dickins
X-Patchwork-Id: 13404935
Date: Fri, 29 Sep 2023 20:32:40 -0700 (PDT)
From: Hugh Dickins
To: Andrew Morton
Cc: Christian Brauner, Carlos Maiolino, Chuck Lever, Jan Kara,
    Matthew Wilcox, Johannes Weiner, Axel Rasmussen,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org
Subject: [PATCH 7/8] shmem: _add_to_page_cache() before shmem_inode_acct_blocks()
Message-ID: <22ddd06-d919-33b-1219-56335c1bf28e@google.com>

There has been a recurring problem: when a tmpfs volume is being filled
by racing threads, some of them fail with ENOSPC (or consequent SIGBUS
or EFAULT) even though all allocations were within the permitted size.

This has been a problem since the early days, but it was magnified and
complicated by the addition of huge pages.  We have often worked around
it by adding some slop to the tmpfs size, but it's hard to say how much
is needed, and some users prefer not to do that: e.g. keeping sparse
files in a tightly tailored tmpfs helps to prevent accidental writing
to holes.

This comes from the allocation sequence:
1. check page cache for existing folio
2. check and reserve from vm_enough_memory
3. check and account from size of tmpfs
4. if huge, check page cache for overlapping folio
5. allocate physical folio, huge or small
6. check and charge from mem cgroup limit
7. add to page cache (but maybe another folio already got in).

Concurrent tasks allocating at the same position could deplete the size
allowance and fail.  Doing vm_enough_memory and size checks before the
folio allocation was intentional (to limit the load on the page
allocator from this source) and still has some virtue; but memory
cgroup never did that, so I think it's better reordered to favour
predictable behaviour:
1. check page cache for existing folio
2. if huge, check page cache for overlapping folio
3. allocate physical folio, huge or small
4. check and charge from mem cgroup limit
5. add to page cache (but maybe another folio already got in)
6. check and reserve from vm_enough_memory
7. check and account from size of tmpfs.

The folio lock held from allocation onwards ensures that the !uptodate
folio cannot be used by others, and can safely be deleted from the
cache if checks 6 or 7 subsequently fail (and those waiting on folio
lock already check that the folio was not truncated once they get the
lock); and the early addition to page cache ensures that racers find
it before they try to duplicate the accounting.

Seize the opportunity to tidy up shmem_get_folio_gfp()'s ENOSPC
retrying, which can be combined inside the new
shmem_alloc_and_add_folio(): doing 2 splits twice (once huge, once
nonhuge) is not exactly equivalent to trying 5 splits (and giving up
early on huge), but let's keep it simple unless more complication
proves necessary.

Userfaultfd is a foreign country: they do things differently there,
and for good reason - to avoid mmap_lock deadlock.  Leave ordering in
shmem_mfill_atomic_pte() untouched for now, but I would rather like to
mesh it better with shmem_get_folio_gfp() in the future.
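[Not part of the patch; added as a reading aid.]  The reordering above can
be summarised by the following stand-alone C sketch.  Every function in it
is a stub invented for illustration (none of these names are the kernel's);
only the order of the checks and the unwind on a late accounting failure
mirror what the new shmem_alloc_and_add_folio() does in the diff below.

/*
 * Toy model of the reordered allocation sequence (steps 1-7 above).
 * All helpers are stubs; flip their return values to see how a late
 * failure unwinds by deleting the still-locked folio from the cache.
 */
#include <stdbool.h>
#include <stdio.h>

static bool page_cache_conflict(void)  { return false; } /* steps 1-2 */
static bool alloc_physical_folio(void) { return true;  } /* step 3 */
static bool charge_memcg(void)         { return true;  } /* step 4 */
static bool add_to_page_cache(void)    { return true;  } /* step 5 */
static bool acct_size_and_memory(void) { return true;  } /* steps 6-7 */
static void delete_from_page_cache(void) { puts("  deleted from cache"); }
static void free_folio(void)             { puts("  folio freed"); }

/* Returns 0 on success, -1 on failure, in the new order of checks. */
static int alloc_and_add_model(void)
{
	if (page_cache_conflict())	/* someone else got there first */
		return -1;
	if (!alloc_physical_folio())
		return -1;
	if (!charge_memcg())
		goto free;
	if (!add_to_page_cache())	/* racers now find this folio */
		goto free;
	if (!acct_size_and_memory()) {
		/* safe: folio is still locked and !uptodate */
		delete_from_page_cache();
		goto free;
	}
	return 0;
free:
	free_folio();
	return -1;
}

int main(void)
{
	printf("result: %d\n", alloc_and_add_model());
	return 0;
}

The point of the model is the one argued above: by the time the size and
vm_enough_memory checks can fail, the folio is already visible in the page
cache, so racing threads find it instead of duplicating the accounting.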
Signed-off-by: Hugh Dickins
---
 mm/shmem.c | 235 +++++++++++++++++++++++++++--------------------------
 1 file changed, 121 insertions(+), 114 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 0a7f7b567b80..4f4ab26bc58a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -789,13 +789,11 @@ static int shmem_add_to_page_cache(struct folio *folio,
 		xas_store(&xas, folio);
 		if (xas_error(&xas))
 			goto unlock;
-		if (folio_test_pmd_mappable(folio)) {
-			count_vm_event(THP_FILE_ALLOC);
+		if (folio_test_pmd_mappable(folio))
 			__lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
-		}
-		mapping->nrpages += nr;
 		__lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
 		__lruvec_stat_mod_folio(folio, NR_SHMEM, nr);
+		mapping->nrpages += nr;
 unlock:
 		xas_unlock_irq(&xas);
 	} while (xas_nomem(&xas, gfp));
@@ -1612,25 +1610,17 @@ static struct folio *shmem_alloc_hugefolio(gfp_t gfp,
 		struct shmem_inode_info *info, pgoff_t index)
 {
 	struct vm_area_struct pvma;
-	struct address_space *mapping = info->vfs_inode.i_mapping;
-	pgoff_t hindex;
 	struct folio *folio;
 
-	hindex = round_down(index, HPAGE_PMD_NR);
-	if (xa_find(&mapping->i_pages, &hindex, hindex + HPAGE_PMD_NR - 1,
-			XA_PRESENT))
-		return NULL;
-
-	shmem_pseudo_vma_init(&pvma, info, hindex);
+	shmem_pseudo_vma_init(&pvma, info, index);
 	folio = vma_alloc_folio(gfp, HPAGE_PMD_ORDER, &pvma, 0, true);
 	shmem_pseudo_vma_destroy(&pvma);
-	if (!folio)
-		count_vm_event(THP_FILE_FALLBACK);
+
 	return folio;
 }
 
 static struct folio *shmem_alloc_folio(gfp_t gfp,
-			struct shmem_inode_info *info, pgoff_t index)
+		struct shmem_inode_info *info, pgoff_t index)
 {
 	struct vm_area_struct pvma;
 	struct folio *folio;
@@ -1642,36 +1632,101 @@ static struct folio *shmem_alloc_folio(gfp_t gfp,
 	return folio;
 }
 
-static struct folio *shmem_alloc_and_acct_folio(gfp_t gfp, struct inode *inode,
-		pgoff_t index, bool huge)
+static struct folio *shmem_alloc_and_add_folio(gfp_t gfp,
+		struct inode *inode, pgoff_t index,
+		struct mm_struct *fault_mm, bool huge)
 {
+	struct address_space *mapping = inode->i_mapping;
 	struct shmem_inode_info *info = SHMEM_I(inode);
 	struct folio *folio;
-	int nr;
-	int err;
+	long pages;
+	int error;
 
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		huge = false;
 
-	nr = huge ? HPAGE_PMD_NR : 1;
-	err = shmem_inode_acct_blocks(inode, nr);
-	if (err)
-		goto failed;
+	if (huge) {
+		pages = HPAGE_PMD_NR;
+		index = round_down(index, HPAGE_PMD_NR);
+
+		/*
+		 * Check for conflict before waiting on a huge allocation.
+		 * Conflict might be that a huge page has just been allocated
+		 * and added to page cache by a racing thread, or that there
+		 * is already at least one small page in the huge extent.
+		 * Be careful to retry when appropriate, but not forever!
+		 * Elsewhere -EEXIST would be the right code, but not here.
+		 */
+		if (xa_find(&mapping->i_pages, &index,
+				index + HPAGE_PMD_NR - 1, XA_PRESENT))
+			return ERR_PTR(-E2BIG);
 
-	if (huge)
 		folio = shmem_alloc_hugefolio(gfp, info, index);
-	else
+		if (!folio)
+			count_vm_event(THP_FILE_FALLBACK);
+	} else {
+		pages = 1;
 		folio = shmem_alloc_folio(gfp, info, index);
-	if (folio) {
-		__folio_set_locked(folio);
-		__folio_set_swapbacked(folio);
-		return folio;
+	}
+	if (!folio)
+		return ERR_PTR(-ENOMEM);
+
+	__folio_set_locked(folio);
+	__folio_set_swapbacked(folio);
+
+	gfp &= GFP_RECLAIM_MASK;
+	error = mem_cgroup_charge(folio, fault_mm, gfp);
+	if (error) {
+		if (xa_find(&mapping->i_pages, &index,
+				index + pages - 1, XA_PRESENT)) {
+			error = -EEXIST;
+		} else if (huge) {
+			count_vm_event(THP_FILE_FALLBACK);
+			count_vm_event(THP_FILE_FALLBACK_CHARGE);
+		}
+		goto unlock;
 	}
 
-	err = -ENOMEM;
-	shmem_inode_unacct_blocks(inode, nr);
-failed:
-	return ERR_PTR(err);
+	error = shmem_add_to_page_cache(folio, mapping, index, NULL, gfp);
+	if (error)
+		goto unlock;
+
+	error = shmem_inode_acct_blocks(inode, pages);
+	if (error) {
+		struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
+		long freed;
+		/*
+		 * Try to reclaim some space by splitting a few
+		 * large folios beyond i_size on the filesystem.
+		 */
+		shmem_unused_huge_shrink(sbinfo, NULL, 2);
+		/*
+		 * And do a shmem_recalc_inode() to account for freed pages:
+		 * except our folio is there in cache, so not quite balanced.
+		 */
+		spin_lock(&info->lock);
+		freed = pages + info->alloced - info->swapped -
+			READ_ONCE(mapping->nrpages);
+		if (freed > 0)
+			info->alloced -= freed;
+		spin_unlock(&info->lock);
+		if (freed > 0)
+			shmem_inode_unacct_blocks(inode, freed);
+		error = shmem_inode_acct_blocks(inode, pages);
+		if (error) {
+			filemap_remove_folio(folio);
+			goto unlock;
		}
+	}
+
+	shmem_recalc_inode(inode, pages, 0);
+	folio_add_lru(folio);
+	return folio;
+
+unlock:
+	folio_unlock(folio);
+	folio_put(folio);
+	return ERR_PTR(error);
 }
 
 /*
@@ -1907,29 +1962,22 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		struct vm_fault *vmf, vm_fault_t *fault_type)
 {
 	struct vm_area_struct *vma = vmf ? vmf->vma : NULL;
-	struct address_space *mapping = inode->i_mapping;
-	struct shmem_inode_info *info = SHMEM_I(inode);
-	struct shmem_sb_info *sbinfo;
 	struct mm_struct *fault_mm;
 	struct folio *folio;
-	pgoff_t hindex;
-	gfp_t huge_gfp;
 	int error;
-	int once = 0;
-	int alloced = 0;
+	bool alloced;
 
 	if (index > (MAX_LFS_FILESIZE >> PAGE_SHIFT))
 		return -EFBIG;
 repeat:
 	if (sgp <= SGP_CACHE &&
-	    ((loff_t)index << PAGE_SHIFT) >= i_size_read(inode)) {
+	    ((loff_t)index << PAGE_SHIFT) >= i_size_read(inode))
 		return -EINVAL;
-	}
 
-	sbinfo = SHMEM_SB(inode->i_sb);
+	alloced = false;
 	fault_mm = vma ? vma->vm_mm : NULL;
 
-	folio = filemap_get_entry(mapping, index);
+	folio = filemap_get_entry(inode->i_mapping, index);
 	if (folio && vma && userfaultfd_minor(vma)) {
 		if (!xa_is_value(folio))
 			folio_put(folio);
@@ -1951,7 +1999,7 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		folio_lock(folio);
 
 		/* Has the folio been truncated or swapped out? */
-		if (unlikely(folio->mapping != mapping)) {
+		if (unlikely(folio->mapping != inode->i_mapping)) {
 			folio_unlock(folio);
 			folio_put(folio);
 			goto repeat;
@@ -1986,65 +2034,38 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		return 0;
 	}
 
-	if (!shmem_is_huge(inode, index, false,
-			   vma ? vma->vm_mm : NULL, vma ? vma->vm_flags : 0))
-		goto alloc_nohuge;
+	if (shmem_is_huge(inode, index, false, fault_mm,
+			  vma ? vma->vm_flags : 0)) {
+		gfp_t huge_gfp;
 
-	huge_gfp = vma_thp_gfp_mask(vma);
-	huge_gfp = limit_gfp_mask(huge_gfp, gfp);
-	folio = shmem_alloc_and_acct_folio(huge_gfp, inode, index, true);
-	if (IS_ERR(folio)) {
-alloc_nohuge:
-		folio = shmem_alloc_and_acct_folio(gfp, inode, index, false);
-	}
-	if (IS_ERR(folio)) {
-		int retry = 5;
-
-		error = PTR_ERR(folio);
-		folio = NULL;
-		if (error != -ENOSPC)
-			goto unlock;
-		/*
-		 * Try to reclaim some space by splitting a large folio
-		 * beyond i_size on the filesystem.
-		 */
-		while (retry--) {
-			int ret;
-
-			ret = shmem_unused_huge_shrink(sbinfo, NULL, 1);
-			if (ret == SHRINK_STOP)
-				break;
-			if (ret)
-				goto alloc_nohuge;
+		huge_gfp = vma_thp_gfp_mask(vma);
+		huge_gfp = limit_gfp_mask(huge_gfp, gfp);
+		folio = shmem_alloc_and_add_folio(huge_gfp,
+				inode, index, fault_mm, true);
+		if (!IS_ERR(folio)) {
+			count_vm_event(THP_FILE_ALLOC);
+			goto alloced;
 		}
+		if (PTR_ERR(folio) == -EEXIST)
+			goto repeat;
+	}
+
+	folio = shmem_alloc_and_add_folio(gfp, inode, index, fault_mm, false);
+	if (IS_ERR(folio)) {
+		error = PTR_ERR(folio);
+		if (error == -EEXIST)
+			goto repeat;
+		folio = NULL;
 		goto unlock;
 	}
 
-	hindex = round_down(index, folio_nr_pages(folio));
-
-	if (sgp == SGP_WRITE)
-		__folio_set_referenced(folio);
-
-	error = mem_cgroup_charge(folio, fault_mm, gfp);
-	if (error) {
-		if (folio_test_pmd_mappable(folio)) {
-			count_vm_event(THP_FILE_FALLBACK);
-			count_vm_event(THP_FILE_FALLBACK_CHARGE);
-		}
-		goto unacct;
-	}
-
-	error = shmem_add_to_page_cache(folio, mapping, hindex, NULL, gfp);
-	if (error)
-		goto unacct;
-
-	folio_add_lru(folio);
-	shmem_recalc_inode(inode, folio_nr_pages(folio), 0);
+alloced:
 	alloced = true;
-
 	if (folio_test_pmd_mappable(folio) &&
 	    DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE) <
 					folio_next_index(folio) - 1) {
+		struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
+		struct shmem_inode_info *info = SHMEM_I(inode);
 		/*
 		 * Part of the large folio is beyond i_size: subject
 		 * to shrink under memory pressure.
@@ -2062,6 +2083,8 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 		spin_unlock(&sbinfo->shrinklist_lock);
 	}
 
+	if (sgp == SGP_WRITE)
+		folio_set_referenced(folio);
 	/*
 	 * Let SGP_FALLOC use the SGP_WRITE optimization on a new folio.
 	 */
@@ -2085,11 +2108,6 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	/* Perhaps the file has been truncated since we checked */
 	if (sgp <= SGP_CACHE &&
 	    ((loff_t)index << PAGE_SHIFT) >= i_size_read(inode)) {
-		if (alloced) {
-			folio_clear_dirty(folio);
-			filemap_remove_folio(folio);
-			shmem_recalc_inode(inode, 0, 0);
-		}
 		error = -EINVAL;
 		goto unlock;
 	}
@@ -2100,25 +2118,14 @@ static int shmem_get_folio_gfp(struct inode *inode, pgoff_t index,
 	/*
 	 * Error recovery.
 	 */
-unacct:
-	shmem_inode_unacct_blocks(inode, folio_nr_pages(folio));
-
-	if (folio_test_large(folio)) {
-		folio_unlock(folio);
-		folio_put(folio);
-		goto alloc_nohuge;
-	}
 unlock:
+	if (alloced)
+		filemap_remove_folio(folio);
+	shmem_recalc_inode(inode, 0, 0);
 	if (folio) {
 		folio_unlock(folio);
 		folio_put(folio);
 	}
-	if (error == -ENOSPC && !once++) {
-		shmem_recalc_inode(inode, 0, 0);
-		goto repeat;
-	}
-	if (error == -EEXIST)
-		goto repeat;
 	return error;
 }