From patchwork Fri Oct 16 02:41:59 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11840539
Date: Thu, 15 Oct 2020 19:41:59 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, cai@lca.pw, kirill@shutemov.name,
 linux-mm@kvack.org, mm-commits@vger.kernel.org, songliubraving@fb.com,
 torvalds@linux-foundation.org, willy@infradead.org
Subject: [patch 018/156] mm/filemap: fix storing to a THP shadow entry
Message-ID: <20201016024159.MuK94mohm%akpm@linux-foundation.org>
In-Reply-To: <20201015192732.f448da14e9854c7cb7299956@linux-foundation.org>

From: "Matthew Wilcox (Oracle)"
Subject: mm/filemap: fix storing to a THP shadow entry

When a THP is removed from the page cache by reclaim, we replace it with
a shadow entry that occupies all slots of the XArray previously occupied
by the THP.  If the user then accesses that page again, we only allocate
a single page, but storing it into the shadow entry replaces all entries
with that one page.  That leads to bugs like

	page dumped because: VM_BUG_ON_PAGE(page_to_pgoff(page) != offset)
	------------[ cut here ]------------
	kernel BUG at mm/filemap.c:2529!

https://bugzilla.kernel.org/show_bug.cgi?id=206569

This is hard to reproduce with mainline, but happens regularly with the
THP patchset (as so many more THPs are created).  This solution is taken
from the THP patchset.  It splits the shadow entry into order-0 pieces
at the time that we bring a new page into the cache.

Link: https://lkml.kernel.org/r/20200903183029.14930-4-willy@infradead.org
Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
Signed-off-by: Matthew Wilcox (Oracle)
Cc: Song Liu
Cc: "Kirill A. Shutemov"
Cc: Qian Cai
Signed-off-by: Andrew Morton
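For readers unfamiliar with multi-index XArray entries, here is a toy,
standalone C model of the failure mode (the names are invented for
illustration; the real XArray is only imitated by a single entry standing
in for 2^order slots).  It shows why storing one order-0 page through an
unsplit multi-index entry makes that page visible at every offset the THP
used to cover:

#include <stdio.h>

#define NSLOTS 4	/* an order-2 entry covers 4 order-0 indices */

/*
 * Toy model of multi-index semantics: a store through an unsplit
 * multi-index entry replaces the value seen at ALL of its indices,
 * much like xas_store() on an entry that has not been split.
 */
static const char *lookup(const char *entry, int index)
{
	(void)index;	/* one entry answers for every covered index */
	return entry;
}

int main(void)
{
	/* Reclaim: one shadow value stands in for the whole THP. */
	const char *entry = "shadow";

	/*
	 * Refault at offset 0: a single page (page->index == 0) is
	 * stored into the unsplit entry, so it now answers lookups
	 * for offsets 0..3, not just offset 0.
	 */
	entry = "page with ->index == 0";

	for (int i = 0; i < NSLOTS; i++)
		printf("offset %d -> %s\n", i, lookup(entry, i));

	/*
	 * A lookup at offset 2 returns a page whose index is 0, so
	 * page_to_pgoff(page) != offset and the VM_BUG_ON fires.
	 */
	return 0;
}

Running it prints the same page for all four offsets; the kernel analogue
of "offset 2" is where find_get_page() trips the VM_BUG_ON quoted above.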
Shutemov" Cc: Qian Cai Signed-off-by: Andrew Morton --- mm/filemap.c | 42 +++++++++++++++++++++++++++++++----------- 1 file changed, 31 insertions(+), 11 deletions(-) --- a/mm/filemap.c~mm-filemap-fix-storing-to-a-thp-shadow-entry +++ a/mm/filemap.c @@ -829,13 +829,12 @@ EXPORT_SYMBOL_GPL(replace_page_cache_pag static int __add_to_page_cache_locked(struct page *page, struct address_space *mapping, - pgoff_t offset, gfp_t gfp_mask, + pgoff_t offset, gfp_t gfp, void **shadowp) { XA_STATE(xas, &mapping->i_pages, offset); int huge = PageHuge(page); int error; - void *old; VM_BUG_ON_PAGE(!PageLocked(page), page); VM_BUG_ON_PAGE(PageSwapBacked(page), page); @@ -846,25 +845,46 @@ static int __add_to_page_cache_locked(st page->index = offset; if (!huge) { - error = mem_cgroup_charge(page, current->mm, gfp_mask); + error = mem_cgroup_charge(page, current->mm, gfp); if (error) goto error; } + gfp &= GFP_RECLAIM_MASK; + do { + unsigned int order = xa_get_order(xas.xa, xas.xa_index); + void *entry, *old = NULL; + + if (order > thp_order(page)) + xas_split_alloc(&xas, xa_load(xas.xa, xas.xa_index), + order, gfp); xas_lock_irq(&xas); - old = xas_load(&xas); - if (old && !xa_is_value(old)) - xas_set_err(&xas, -EEXIST); + xas_for_each_conflict(&xas, entry) { + old = entry; + if (!xa_is_value(entry)) { + xas_set_err(&xas, -EEXIST); + goto unlock; + } + } + + if (old) { + if (shadowp) + *shadowp = old; + /* entry may have been split before we acquired lock */ + order = xa_get_order(xas.xa, xas.xa_index); + if (order > thp_order(page)) { + xas_split(&xas, old, order); + xas_reset(&xas); + } + } + xas_store(&xas, page); if (xas_error(&xas)) goto unlock; - if (xa_is_value(old)) { + if (old) mapping->nrexceptional--; - if (shadowp) - *shadowp = old; - } mapping->nrpages++; /* hugetlb pages do not participate in page cache accounting */ @@ -872,7 +892,7 @@ static int __add_to_page_cache_locked(st __inc_lruvec_page_state(page, NR_FILE_PAGES); unlock: xas_unlock_irq(&xas); - } while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK)); + } while (xas_nomem(&xas, gfp)); if (xas_error(&xas)) { error = xas_error(&xas);