From patchwork Wed Dec 8 04:22:38 2021
From: "Matthew Wilcox (Oracle)"
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)"
Subject: [PATCH 30/48] filemap: Use a folio in filemap_page_mkwrite
Date: Wed, 8 Dec 2021 04:22:38 +0000
Message-Id: <20211208042256.1923824-31-willy@infradead.org>
In-Reply-To: <20211208042256.1923824-1-willy@infradead.org>
References: <20211208042256.1923824-1-willy@infradead.org>

This fixes a bug for tail pages.  They always have a NULL mapping, so
the check would fail and we would never mark the folio as dirty.
Ends up growing the kernel by 19 bytes although there will be fewer
calls to compound_head() dynamically.
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 8cca04a79808..4ae9d5befffa 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3349,24 +3349,24 @@ EXPORT_SYMBOL(filemap_map_pages);
 vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf)
 {
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
-	struct page *page = vmf->page;
+	struct folio *folio = page_folio(vmf->page);
 	vm_fault_t ret = VM_FAULT_LOCKED;
 
 	sb_start_pagefault(mapping->host->i_sb);
 	file_update_time(vmf->vma->vm_file);
-	lock_page(page);
-	if (page->mapping != mapping) {
-		unlock_page(page);
+	folio_lock(folio);
+	if (folio->mapping != mapping) {
+		folio_unlock(folio);
 		ret = VM_FAULT_NOPAGE;
 		goto out;
 	}
 	/*
-	 * We mark the page dirty already here so that when freeze is in
+	 * We mark the folio dirty already here so that when freeze is in
 	 * progress, we are guaranteed that writeback during freezing will
-	 * see the dirty page and writeprotect it again.
+	 * see the dirty folio and writeprotect it again.
 	 */
-	set_page_dirty(page);
-	wait_for_stable_page(page);
+	folio_mark_dirty(folio);
+	folio_wait_stable(folio);
 out:
 	sb_end_pagefault(mapping->host->i_sb);
 	return ret;