From patchwork Wed Feb 12 04:18:21 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377491
Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123])
	by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id AF8961800
	for ; Wed, 12 Feb 2020 04:19:04 +0000 (UTC)
Received: from vger.kernel.org (vger.kernel.org [209.132.180.67])
	by mail.kernel.org (Postfix) with ESMTP id 8FB7221734
	for ; Wed, 12 Feb 2020 04:19:04 +0000 (UTC)
Authentication-Results: mail.kernel.org;
	dkim=fail reason="signature verification failed" (2048-bit key)
	header.d=infradead.org header.i=@infradead.org header.b="o+4Q+Lx3"
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1728212AbgBLETD (ORCPT ); Tue, 11 Feb 2020 23:19:03 -0500
Received: from bombadil.infradead.org ([198.137.202.133]:54106
	"EHLO bombadil.infradead.org" rhost-flags-OK-OK-OK-OK)
	by vger.kernel.org with ESMTP id S1728137AbgBLESu (ORCPT );
	Tue, 11 Feb 2020 23:18:50 -0500
Received: from willy by bombadil.infradead.org with local
	(Exim 4.92.3 #3 (Red Hat Linux)) id 1j1jU6-0006mM-Gc;
	Wed, 12 Feb 2020 04:18:46 +0000
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCH v2 01/25] mm: Use vm_fault error code directly
Date: Tue, 11 Feb 2020 20:18:21 -0800
Message-Id: <20200212041845.25879-2-willy@infradead.org>
X-Mailer: git-send-email 2.21.1
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>
Sender: linux-fsdevel-owner@vger.kernel.org
Precedence: bulk
X-Mailing-List: linux-fsdevel@vger.kernel.org

From: "Matthew Wilcox (Oracle)"

Use VM_FAULT_OOM instead of indirecting through vmf_error(-ENOMEM).

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kirill A. Shutemov
Reviewed-by: Christoph Hellwig
---
 mm/filemap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1784478270e1..1beb7716276b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2491,7 +2491,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		if (!page) {
 			if (fpin)
 				goto out_retry;
-			return vmf_error(-ENOMEM);
+			return VM_FAULT_OOM;
 		}
 	}

From patchwork Wed Feb 12 04:18:22 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377561
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 02/25] mm: Optimise find_subpage for !THP
Date: Tue, 11 Feb 2020 20:18:22 -0800
Message-Id: <20200212041845.25879-3-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

If THP is disabled, find_subpage can become a no-op.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kirill A. Shutemov
---
 include/linux/pagemap.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 75bdfec49710..0842622cca90 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -340,7 +340,7 @@ static inline struct page *find_subpage(struct page *page, pgoff_t offset)
 	VM_BUG_ON_PAGE(PageTail(page), page);
 
-	return page + (offset & (compound_nr(page) - 1));
+	return page + (offset & (hpage_nr_pages(page) - 1));
 }
 
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);

From patchwork Wed Feb 12 04:18:23 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377557
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 03/25] mm: Use VM_BUG_ON_PAGE in clear_page_dirty_for_io
Date: Tue, 11 Feb 2020 20:18:23 -0800
Message-Id: <20200212041845.25879-4-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Dumping the page information in this circumstance helps for debugging.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Kirill A. Shutemov
---
 mm/page-writeback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2caf780a42e7..9173c25cf8e6 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2655,7 +2655,7 @@ int clear_page_dirty_for_io(struct page *page)
 	struct address_space *mapping = page_mapping(page);
 	int ret = 0;
 
-	BUG_ON(!PageLocked(page));
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
 	if (mapping && mapping_cap_account_dirty(mapping)) {
 		struct inode *inode = mapping->host;

From patchwork Wed Feb 12 04:18:24 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377553
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 04/25] mm: Unexport find_get_entry
Date: Tue, 11 Feb 2020 20:18:24 -0800
Message-Id: <20200212041845.25879-5-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

No in-tree users (proc, madvise, memcg, mincore) can be built as a module.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
Acked-by: Kirill A. Shutemov
---
 mm/filemap.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1beb7716276b..83ce9ce0bee1 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1536,7 +1536,6 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 
 	return page;
 }
-EXPORT_SYMBOL(find_get_entry);
 
 /**
  * find_lock_entry - locate, pin and lock a page cache entry

From patchwork Wed Feb 12 04:18:25 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377555
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 05/25] mm: Fix documentation of FGP flags
Date: Tue, 11 Feb 2020 20:18:25 -0800
Message-Id: <20200212041845.25879-6-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

We never had PCG flags; they've been called FGP flags since their
introduction in 2014.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kirill A. Shutemov
---
 mm/filemap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 83ce9ce0bee1..3204293f9b58 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1577,12 +1577,12 @@ EXPORT_SYMBOL(find_lock_entry);
  * pagecache_get_page - find and get a page reference
  * @mapping: the address_space to search
  * @offset: the page index
- * @fgp_flags: PCG flags
+ * @fgp_flags: FGP flags
  * @gfp_mask: gfp mask to use for the page cache data page allocation
  *
  * Looks up the page cache slot at @mapping & @offset.
  *
- * PCG flags modify how the page is returned.
+ * FGP flags modify how the page is returned.
  *
  * @fgp_flags can be:
  *

From patchwork Wed Feb 12 04:18:26 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377545
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 06/25] mm: Allow hpages to be arbitrary order
Date: Tue, 11 Feb 2020 20:18:26 -0800
Message-Id: <20200212041845.25879-7-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Remove the assumption in hpage_nr_pages() that compound pages are
necessarily PMD sized. The return type needs to be signed as we need
to use the negative value, eg when calling update_lru_size().

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/huge_mm.h | 10 +++------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 5aca3d1bdb32..16367e2f771e 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -230,12 +230,8 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 	else
 		return NULL;
 }
-static inline int hpage_nr_pages(struct page *page)
-{
-	if (unlikely(PageTransHuge(page)))
-		return HPAGE_PMD_NR;
-	return 1;
-}
+
+#define hpage_nr_pages(page)	(long)compound_nr(page)
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
@@ -289,7 +285,7 @@ static inline struct list_head *page_deferred_list(struct page *page)
 #define HPAGE_PUD_MASK ({ BUILD_BUG(); 0; })
 #define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; })
 
-#define hpage_nr_pages(x) 1
+#define hpage_nr_pages(x) 1L
 
 static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 {

From patchwork Wed Feb 12 04:18:27 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377509
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 07/25] mm: Introduce thp_size
Date: Tue, 11 Feb 2020 20:18:27 -0800
Message-Id: <20200212041845.25879-8-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

This is like page_size(), but compiles down to just PAGE_SIZE if THP
is disabled. Convert the users of hpage_nr_pages() which would prefer
this interface.

Signed-off-by: Matthew Wilcox (Oracle)
---
 drivers/nvdimm/btt.c    | 4 +---
 drivers/nvdimm/pmem.c   | 3 +--
 include/linux/huge_mm.h | 2 ++
 mm/internal.h           | 2 +-
 mm/page_io.c            | 2 +-
 mm/page_vma_mapped.c    | 4 ++--
 6 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index 0d04ea3d9fd7..5d6a2a22f5a0 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1488,10 +1488,8 @@ static int btt_rw_page(struct block_device *bdev, sector_t sector,
 {
 	struct btt *btt = bdev->bd_disk->private_data;
 	int rc;
-	unsigned int len;
 
-	len = hpage_nr_pages(page) * PAGE_SIZE;
-	rc = btt_do_bvec(btt, NULL, page, len, 0, op, sector);
+	rc = btt_do_bvec(btt, NULL, page, thp_size(page), 0, op, sector);
 	if (rc == 0)
 		page_endio(page, op_is_write(op), 0);

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 4eae441f86c9..9c71c81f310f 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -223,8 +223,7 @@ static int pmem_rw_page(struct block_device *bdev, sector_t sector,
 	struct pmem_device *pmem = bdev->bd_queue->queuedata;
 	blk_status_t rc;
 
-	rc = pmem_do_bvec(pmem, page, hpage_nr_pages(page) * PAGE_SIZE,
-			0, op, sector);
+	rc = pmem_do_bvec(pmem, page, thp_size(page), 0, op, sector);
 
 	/*
 	 * The ->rw_page interface is subtle and tricky.  The core

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 16367e2f771e..3680ae2d9019 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -232,6 +232,7 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 }
 
 #define hpage_nr_pages(page)	(long)compound_nr(page)
+#define thp_size(page)		page_size(page)
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
@@ -286,6 +287,7 @@ static inline struct list_head *page_deferred_list(struct page *page)
 #define HPAGE_PUD_SIZE ({ BUILD_BUG(); 0; })
 
 #define hpage_nr_pages(x) 1L
+#define thp_size(x) PAGE_SIZE
 
 static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 {

diff --git a/mm/internal.h b/mm/internal.h
index 41b93c4b3ab7..390d81d8b85f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -358,7 +358,7 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 	unsigned long start, end;
 
 	start = __vma_address(page, vma);
-	end = start + PAGE_SIZE * (hpage_nr_pages(page) - 1);
+	end = start + thp_size(page) - PAGE_SIZE;
 
 	/* page should be within @vma mapping range */
 	VM_BUG_ON_VMA(end < vma->vm_start || start >= vma->vm_end, vma);

diff --git a/mm/page_io.c b/mm/page_io.c
index 76965be1d40e..dd935129e3cb 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -41,7 +41,7 @@ static struct bio *get_swap_bio(gfp_t gfp_flags,
 		bio->bi_iter.bi_sector <<= PAGE_SHIFT - 9;
 		bio->bi_end_io = end_io;
 
-		bio_add_page(bio, page, PAGE_SIZE * hpage_nr_pages(page), 0);
+		bio_add_page(bio, page, thp_size(page), 0);
 	}
 	return bio;
 }

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 719c35246cfa..e65629c056e8 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -227,7 +227,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (pvmw->address >= pvmw->vma->vm_end ||
 	    pvmw->address >=
 			__vma_address(pvmw->page, pvmw->vma) +
-			hpage_nr_pages(pvmw->page) * PAGE_SIZE)
+			thp_size(pvmw->page))
 		return not_found(pvmw);
 	/* Did we cross page table boundary? */
 	if (pvmw->address % PMD_SIZE == 0) {
@@ -268,7 +268,7 @@ int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
 	unsigned long start, end;
 
 	start = __vma_address(page, vma);
-	end = start + PAGE_SIZE * (hpage_nr_pages(page) - 1);
+	end = start + thp_size(page) - PAGE_SIZE;
 
 	if (unlikely(end < vma->vm_start || start >= vma->vm_end))
 		return 0;

From patchwork Wed Feb 12 04:18:28 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377471
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 08/25] mm: Introduce thp_order
Date: Tue, 11 Feb 2020 20:18:28 -0800
Message-Id: <20200212041845.25879-9-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Like compound_order(), except 0 when THP is disabled.
Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/huge_mm.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 3680ae2d9019..3de788ee25bd 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -233,6 +233,7 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
 
 #define hpage_nr_pages(page)	(long)compound_nr(page)
 #define thp_size(page)		page_size(page)
+#define thp_order(page)		compound_order(page)
 
 struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
@@ -288,6 +289,7 @@ static inline struct list_head *page_deferred_list(struct page *page)
 
 #define hpage_nr_pages(x) 1L
 #define thp_size(x) PAGE_SIZE
+#define thp_order(x) 0U
 
 static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
 {

From patchwork Wed Feb 12 04:18:29 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377519
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 09/25] fs: Add a filesystem flag for large pages
Date: Tue, 11 Feb 2020 20:18:29 -0800
Message-Id: <20200212041845.25879-10-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

The page cache needs to know whether the filesystem supports pages
> PAGE_SIZE.
Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/fs.h | 1 + 1 file changed, 1 insertion(+) diff --git a/include/linux/fs.h b/include/linux/fs.h index d4e2d2964346..24e720723afb 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -2235,6 +2235,7 @@ struct file_system_type { #define FS_HAS_SUBTYPE 4 #define FS_USERNS_MOUNT 8 /* Can be mounted by userns root */ #define FS_DISALLOW_NOTIFY_PERM 16 /* Disable fanotify permission events */ +#define FS_LARGE_PAGES 8192 /* Remove once all fs converted */ #define FS_RENAME_DOES_D_MOVE 32768 /* FS will handle d_move() during rename() internally. */ int (*init_fs_context)(struct fs_context *); const struct fs_parameter_spec *parameters;
From patchwork Wed Feb 12 04:18:30 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377541
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [PATCH v2 10/25] fs: Introduce i_blocks_per_page
Date: Tue, 11 Feb 2020 20:18:30 -0800
Message-Id: <20200212041845.25879-11-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>
From: "Matthew Wilcox (Oracle)" This helper is useful for both large pages in the page cache and for supporting block size larger than page size. Convert some example users (we have a few different ways of writing this idiom).
Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 8 ++++---- fs/jfs/jfs_metapage.c | 2 +- fs/xfs/xfs_aops.c | 2 +- include/linux/pagemap.h | 16 ++++++++++++++++ 4 files changed, 22 insertions(+), 6 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index e40eb45230fa..c551a48e2a81 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -46,7 +46,7 @@ iomap_page_create(struct inode *inode, struct page *page) { struct iomap_page *iop = to_iomap_page(page); - if (iop || i_blocksize(inode) == PAGE_SIZE) + if (iop || i_blocks_per_page(inode, page) <= 1) return iop; iop = kmalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL); @@ -152,7 +152,7 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len) unsigned int i; spin_lock_irqsave(&iop->uptodate_lock, flags); - for (i = 0; i < PAGE_SIZE / i_blocksize(inode); i++) { + for (i = 0; i < i_blocks_per_page(inode, page); i++) { if (i >= first && i <= last) set_bit(i, iop->uptodate); else if (!test_bit(i, iop->uptodate)) @@ -1073,7 +1073,7 @@ iomap_finish_page_writeback(struct inode *inode, struct page *page, mapping_set_error(inode->i_mapping, -EIO); } - WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop); + WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop); WARN_ON_ONCE(iop && atomic_read(&iop->write_count) <= 0); if (!iop || atomic_dec_and_test(&iop->write_count)) @@ -1369,7 +1369,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc, int error = 0, count = 0, i; LIST_HEAD(submit_list); - WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop); + WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop); WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0); /* diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c index a2f5338a5ea1..176580f54af9 100644 --- a/fs/jfs/jfs_metapage.c +++ b/fs/jfs/jfs_metapage.c @@ -473,7 +473,7 @@ static int metapage_readpage(struct file *fp, struct page *page) struct inode *inode = 
page->mapping->host; struct bio *bio = NULL; int block_offset; - int blocks_per_page = PAGE_SIZE >> inode->i_blkbits; + int blocks_per_page = i_blocks_per_page(inode, page); sector_t page_start; /* address of page in fs blocks */ sector_t pblock; int xlen; diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c index 0897dd71c622..5573bf2957dd 100644 --- a/fs/xfs/xfs_aops.c +++ b/fs/xfs/xfs_aops.c @@ -544,7 +544,7 @@ xfs_discard_page( page, ip->i_ino, offset); error = xfs_bmap_punch_delalloc_range(ip, start_fsb, - PAGE_SIZE / i_blocksize(inode)); + i_blocks_per_page(inode, page)); if (error && !XFS_FORCED_SHUTDOWN(mp)) xfs_alert(mp, "page discard unable to remove delalloc mapping."); out_invalidate: diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 0842622cca90..aa925295347c 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -748,4 +748,20 @@ static inline int page_mkwrite_check_truncate(struct page *page, return offset; } +/** + * i_blocks_per_page - How many blocks fit in this page. + * @inode: The inode which contains the blocks. + * @page: The (potentially large) page. + * + * If the block size is larger than the size of this page, this function + * will return zero. + * + * Context: Any context. + * Return: The number of filesystem blocks covered by this page.
+ */ +static inline +unsigned int i_blocks_per_page(struct inode *inode, struct page *page) +{ + return thp_size(page) >> inode->i_blkbits; +} #endif /* _LINUX_PAGEMAP_H */
From patchwork Wed Feb 12 04:18:31 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377547
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [PATCH v2 11/25] fs: Make page_mkwrite_check_truncate thp-aware
Date: Tue, 11 Feb 2020 20:18:31 -0800
Message-Id: <20200212041845.25879-12-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>
From: "Matthew Wilcox (Oracle)" If the page is compound, check the appropriate indices and return the appropriate sizes. Signed-off-by: Matthew Wilcox (Oracle) --- include/linux/pagemap.h | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index aa925295347c..2ec33aabdbf6 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -732,17 +732,18 @@ static inline int page_mkwrite_check_truncate(struct page *page, struct inode *inode) { loff_t size = i_size_read(inode); - pgoff_t index = size >> PAGE_SHIFT; - int offset = offset_in_page(size); + pgoff_t first_index = size >> PAGE_SHIFT; + pgoff_t last_index = first_index + hpage_nr_pages(page) - 1; + unsigned long offset = offset_in_this_page(page, size); if (page->mapping != inode->i_mapping) return -EFAULT; /* page is wholly inside EOF */ - if (page->index < index) - return PAGE_SIZE; + if (page->index < first_index) + return thp_size(page); /* page is wholly past EOF */ - if (page->index > index || !offset) + if (page->index > last_index || !offset) return -EFAULT; /* page is partially inside EOF */ return offset;
From patchwork Wed Feb 12 04:18:32 2020
X-Patchwork-Submitter: Matthew
Wilcox
X-Patchwork-Id: 11377517
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [PATCH v2 12/25] mm: Add file_offset_of_ helpers
Date: Tue, 11 Feb 2020 20:18:32 -0800
Message-Id:
<20200212041845.25879-13-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>
From: "Matthew Wilcox (Oracle)" The page_offset function is badly named for people reading the functions which call it. The natural meaning of a function with this name would be 'offset within a page', not 'page offset in bytes within a file'. Dave Chinner suggests file_offset_of_page() as a replacement function name and I'm also adding file_offset_of_next_page() as a helper for the large page work. Also add kernel-doc for these functions so they show up in the kernel API book. page_offset() is retained as a compatibility define for now. Signed-off-by: Matthew Wilcox (Oracle) --- drivers/net/ethernet/ibm/ibmveth.c | 2 -- include/linux/pagemap.h | 25 ++++++++++++++++++++++--- 2 files changed, 22 insertions(+), 5 deletions(-) diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c index 84121aab7ff1..4cad94ac9bc9 100644 --- a/drivers/net/ethernet/ibm/ibmveth.c +++ b/drivers/net/ethernet/ibm/ibmveth.c @@ -978,8 +978,6 @@ static int ibmveth_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) return -EOPNOTSUPP; } -#define page_offset(v) ((unsigned long)(v) & ((1 << 12) - 1)) - static int ibmveth_send(struct ibmveth_adapter *adapter, union ibmveth_buf_desc *descs, unsigned long mss) { diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h index 2ec33aabdbf6..497197315b73 100644 --- a/include/linux/pagemap.h +++ b/include/linux/pagemap.h @@ -432,14 +432,33 @@ static inline pgoff_t page_to_pgoff(struct page *page) return page_to_index(page); } -/* - * Return byte-offset into filesystem object for page. +/** + * file_offset_of_page - File offset of this page. + * @page: Page cache page.
+ * + * Context: Any context. + * Return: The offset of the first byte of this page. */ -static inline loff_t page_offset(struct page *page) +static inline loff_t file_offset_of_page(struct page *page) { return ((loff_t)page->index) << PAGE_SHIFT; } +/* Legacy; please convert callers */ +#define page_offset(page) file_offset_of_page(page) + +/** + * file_offset_of_next_page - File offset of the next page. + * @page: Page cache page. + * + * Context: Any context. + * Return: The offset of the first byte after this page. + */ +static inline loff_t file_offset_of_next_page(struct page *page) +{ + return file_offset_of_page(page) + thp_size(page); +} + static inline loff_t page_file_offset(struct page *page) { return ((loff_t)page_index(page)) << PAGE_SHIFT;
From patchwork Wed Feb 12 04:18:33 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377549
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [PATCH v2 13/25] fs: Add zero_user_large
Date: Tue, 11 Feb 2020 20:18:33 -0800
Message-Id: <20200212041845.25879-14-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>
From: "Matthew Wilcox (Oracle)" We can't kmap() a THP, so add a wrapper around zero_user() for large pages. Signed-off-by: Matthew Wilcox (Oracle) Acked-by: Kirill A.
Shutemov --- include/linux/highmem.h | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/include/linux/highmem.h b/include/linux/highmem.h index ea5cdbd8c2c3..4465b8784353 100644 --- a/include/linux/highmem.h +++ b/include/linux/highmem.h @@ -245,6 +245,28 @@ static inline void zero_user(struct page *page, zero_user_segments(page, start, start + size, 0, 0); } +static inline void zero_user_large(struct page *page, + unsigned start, unsigned size) +{ + unsigned int i; + + for (i = 0; i < hpage_nr_pages(page); i++) { + if (start >= PAGE_SIZE) { + start -= PAGE_SIZE; + } else { + unsigned this_size = size; + + if (size > (PAGE_SIZE - start)) + this_size = PAGE_SIZE - start; + zero_user(page + i, start, this_size); + start = 0; + size -= this_size; + if (!size) + break; + } + } +} + #ifndef __HAVE_ARCH_COPY_USER_HIGHPAGE static inline void copy_user_highpage(struct page *to, struct page *from,
From patchwork Wed Feb 12 04:18:34 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377551
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [PATCH v2 14/25] iomap: Support arbitrarily many blocks per page
Date: Tue, 11 Feb 2020 20:18:34 -0800
Message-Id: <20200212041845.25879-15-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>
From: "Matthew Wilcox (Oracle)" Size the uptodate array dynamically. Now that this array is protected by a spinlock, we can use bitmap functions to set the bits in this array instead of a loop around set_bit().
Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 27 +++++++++------------------ 1 file changed, 9 insertions(+), 18 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index c551a48e2a81..5e5a6b038fc3 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -22,14 +22,14 @@ #include "../internal.h" /* - * Structure allocated for each page when block size < PAGE_SIZE to track + * Structure allocated for each page when block size < page size to track * sub-page uptodate status and I/O completions. */ struct iomap_page { atomic_t read_count; atomic_t write_count; spinlock_t uptodate_lock; - DECLARE_BITMAP(uptodate, PAGE_SIZE / 512); + unsigned long uptodate[]; }; static inline struct iomap_page *to_iomap_page(struct page *page) @@ -45,15 +45,14 @@ static struct iomap_page * iomap_page_create(struct inode *inode, struct page *page) { struct iomap_page *iop = to_iomap_page(page); + unsigned int n, nr_blocks = i_blocks_per_page(inode, page); - if (iop || i_blocks_per_page(inode, page) <= 1) + if (iop || nr_blocks <= 1) return iop; - iop = kmalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL); - atomic_set(&iop->read_count, 0); - atomic_set(&iop->write_count, 0); + n = BITS_TO_LONGS(nr_blocks); + iop = kzalloc(struct_size(iop, uptodate, n), GFP_NOFS | __GFP_NOFAIL); spin_lock_init(&iop->uptodate_lock); - bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE); /* * migrate_page_move_mapping() assumes that pages with private data have @@ -146,20 +145,12 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len) struct iomap_page *iop = to_iomap_page(page); struct inode *inode = page->mapping->host; unsigned first = off >> inode->i_blkbits; - unsigned last = (off + len - 1) >> inode->i_blkbits; - bool uptodate = true; + unsigned count = len >> inode->i_blkbits; unsigned long flags; - unsigned int i; spin_lock_irqsave(&iop->uptodate_lock, flags); - for (i = 0; i < i_blocks_per_page(inode, page); i++) { - if (i >= 
first && i <= last) - set_bit(i, iop->uptodate); - else if (!test_bit(i, iop->uptodate)) - uptodate = false; - } - - if (uptodate) + bitmap_set(iop->uptodate, first, count); + if (bitmap_full(iop->uptodate, i_blocks_per_page(inode, page))) SetPageUptodate(page); spin_unlock_irqrestore(&iop->uptodate_lock, flags); }
From patchwork Wed Feb 12 04:18:35 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377539
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [PATCH v2 15/25] iomap: Support large pages in iomap_adjust_read_range
Date: Tue, 11 Feb 2020 20:18:35 -0800
Message-Id: <20200212041845.25879-16-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>
From: "Matthew Wilcox (Oracle)" Pass the struct page instead of the iomap_page so we can determine the size of the page. Introduce offset_in_this_page() and use thp_size() instead of PAGE_SIZE. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 16 +++++++++------- include/linux/mm.h | 2 ++ 2 files changed, 11 insertions(+), 7 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 5e5a6b038fc3..e522039f627f 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -83,15 +83,16 @@ iomap_page_release(struct page *page) * Calculate the range inside the page that we actually need to read.
*/ static void -iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, +iomap_adjust_read_range(struct inode *inode, struct page *page, loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp) { + struct iomap_page *iop = to_iomap_page(page); loff_t orig_pos = *pos; loff_t isize = i_size_read(inode); unsigned block_bits = inode->i_blkbits; unsigned block_size = (1 << block_bits); - unsigned poff = offset_in_page(*pos); - unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length); + unsigned poff = offset_in_this_page(page, *pos); + unsigned plen = min_t(loff_t, thp_size(page) - poff, length); unsigned first = poff >> block_bits; unsigned last = (poff + plen - 1) >> block_bits; @@ -129,7 +130,8 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop, * page cache for blocks that are entirely outside of i_size. */ if (orig_pos <= isize && orig_pos + length > isize) { - unsigned end = offset_in_page(isize - 1) >> block_bits; + unsigned end = offset_in_this_page(page, isize - 1) >> + block_bits; if (first <= end && last > end) plen -= (last - end) * block_size; @@ -256,7 +258,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, } /* zero post-eof blocks as the page may be mapped */ - iomap_adjust_read_range(inode, iop, &pos, length, &poff, &plen); + iomap_adjust_read_range(inode, page, &pos, length, &poff, &plen); if (plen == 0) goto done; @@ -547,7 +549,6 @@ static int __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags, struct page *page, struct iomap *srcmap) { - struct iomap_page *iop = iomap_page_create(inode, page); loff_t block_size = i_blocksize(inode); loff_t block_start = pos & ~(block_size - 1); loff_t block_end = (pos + len + block_size - 1) & ~(block_size - 1); @@ -556,9 +557,10 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags, if (PageUptodate(page)) return 0; + iomap_page_create(inode, page); do { - iomap_adjust_read_range(inode, iop, 
&block_start, + iomap_adjust_read_range(inode, page, &block_start, block_end - block_start, &poff, &plen); if (plen == 0) break; diff --git a/include/linux/mm.h b/include/linux/mm.h index 52269e56c514..b4bf86590096 100644 --- a/include/linux/mm.h +++ b/include/linux/mm.h @@ -1387,6 +1387,8 @@ static inline void clear_page_pfmemalloc(struct page *page) extern void pagefault_out_of_memory(void); #define offset_in_page(p) ((unsigned long)(p) & ~PAGE_MASK) +#define offset_in_this_page(page, p) \ + ((unsigned long)(p) & (thp_size(page) - 1)) /* * Flags passed to show_mem() and show_free_areas() to suppress output in
From patchwork Wed Feb 12 04:18:36 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377533
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" , linux-kernel@vger.kernel.org
Subject: [PATCH v2 16/25] iomap: Support large pages in read paths
Date: Tue, 11 Feb 2020 20:18:36 -0800
Message-Id: <20200212041845.25879-17-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>
From: "Matthew Wilcox (Oracle)" Use thp_size() instead of PAGE_SIZE, use offset_in_this_page() and abstract away how to access the list of readahead pages.
Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/iomap/buffered-io.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index e522039f627f..68f8903ecd6d 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -179,14 +179,16 @@ iomap_read_finish(struct iomap_page *iop, struct page *page)
 static void
 iomap_read_page_end_io(struct bio_vec *bvec, int error)
 {
-	struct page *page = bvec->bv_page;
+	struct page *page = compound_head(bvec->bv_page);
 	struct iomap_page *iop = to_iomap_page(page);
+	unsigned offset = bvec->bv_offset +
+			PAGE_SIZE * (bvec->bv_page - page);

 	if (unlikely(error)) {
 		ClearPageUptodate(page);
 		SetPageError(page);
 	} else {
-		iomap_set_range_uptodate(page, bvec->bv_offset, bvec->bv_len);
+		iomap_set_range_uptodate(page, offset, bvec->bv_len);
 	}

 	iomap_read_finish(iop, page);
@@ -239,6 +241,16 @@ static inline bool iomap_block_needs_zeroing(struct inode *inode,
 		pos >= i_size_read(inode);
 }

+/*
+ * Estimate the number of vectors we need based on the current page size;
+ * if we're wrong we'll end up doing an overly large allocation or needing
+ * to do a second allocation, neither of which is a big deal.
+ */
+static unsigned int iomap_nr_vecs(struct page *page, loff_t length)
+{
+	return (length + thp_size(page) - 1) >> page_shift(page);
+}
+
 static loff_t
 iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		struct iomap *iomap, struct iomap *srcmap)
@@ -263,7 +275,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		goto done;

 	if (iomap_block_needs_zeroing(inode, iomap, pos)) {
-		zero_user(page, poff, plen);
+		zero_user_large(page, poff, plen);
 		iomap_set_range_uptodate(page, poff, plen);
 		goto done;
 	}
@@ -294,7 +306,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	if (!ctx->bio || !is_contig || bio_full(ctx->bio, plen)) {
 		gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL);
-		int nr_vecs = (length + PAGE_SIZE - 1) >> PAGE_SHIFT;
+		int nr_vecs = iomap_nr_vecs(page, length);

 		if (ctx->bio)
 			submit_bio(ctx->bio);
@@ -331,9 +343,9 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops)

 	trace_iomap_readpage(page->mapping->host, 1);

-	for (poff = 0; poff < PAGE_SIZE; poff += ret) {
-		ret = iomap_apply(inode, page_offset(page) + poff,
-				PAGE_SIZE - poff, 0, ops, &ctx,
+	for (poff = 0; poff < thp_size(page); poff += ret) {
+		ret = iomap_apply(inode, file_offset_of_page(page) + poff,
+				thp_size(page) - poff, 0, ops, &ctx,
 				iomap_readpage_actor);
 		if (ret <= 0) {
 			WARN_ON_ONCE(ret == 0);
@@ -376,7 +388,7 @@ iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length,
 		if (WARN_ON(ret == 0))
 			break;
 		done += ret;
-		if (offset_in_page(pos + done) == 0) {
+		if (offset_in_this_page(ctx->cur_page, pos + done) == 0) {
 			ctx->rac->nr_pages -= ctx->rac->batch_count;
 			if (!ctx->cur_page_in_bio)
 				unlock_page(ctx->cur_page);

From patchwork Wed Feb 12 04:18:37 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377473
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 17/25] iomap: Support large pages in write paths
Date: Tue, 11 Feb 2020 20:18:37 -0800
Message-Id: <20200212041845.25879-18-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Use thp_size() instead of PAGE_SIZE, use offset_in_this_page().  Also
simplify the logic in iomap_do_writepage() for determining end of file.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/iomap/buffered-io.c | 36 +++++++++++++++++++++---------------
 1 file changed, 21 insertions(+), 15 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 68f8903ecd6d..af1f56408fcd 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -445,7 +445,7 @@ iomap_is_partially_uptodate(struct page *page, unsigned long from,
 	unsigned i;

 	/* Limit range to one page */
-	len = min_t(unsigned, PAGE_SIZE - from, count);
+	len = min_t(unsigned, thp_size(page) - from, count);

 	/* First and last blocks in range within page */
 	first = from >> inode->i_blkbits;
@@ -488,7 +488,7 @@ iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
 	 * If we are invalidating the entire page, clear the dirty state from it
 	 * and release it to avoid unnecessary buildup of the LRU.
 	 */
-	if (offset == 0 && len == PAGE_SIZE) {
+	if (offset == 0 && len == thp_size(page)) {
 		WARN_ON_ONCE(PageWriteback(page));
 		cancel_dirty_page(page);
 		iomap_page_release(page);
@@ -564,7 +564,9 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
 	loff_t block_size = i_blocksize(inode);
 	loff_t block_start = pos & ~(block_size - 1);
 	loff_t block_end = (pos + len + block_size - 1) & ~(block_size - 1);
-	unsigned from = offset_in_page(pos), to = from + len, poff, plen;
+	unsigned from = offset_in_this_page(page, pos);
+	unsigned to = from + len;
+	unsigned poff, plen;
 	int status;

 	if (PageUptodate(page))
@@ -696,7 +698,7 @@ __iomap_write_end(struct inode *inode, loff_t pos, unsigned len,
 	 */
 	if (unlikely(copied < len && !PageUptodate(page)))
 		return 0;
-	iomap_set_range_uptodate(page, offset_in_page(pos), len);
+	iomap_set_range_uptodate(page, offset_in_this_page(page, pos), len);
 	iomap_set_page_dirty(page);
 	return copied;
 }
@@ -771,6 +773,10 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		unsigned long bytes;	/* Bytes to write to page */
 		size_t copied;		/* Bytes copied from user */

+		/*
+		 * XXX: We don't know what size page we'll find in the
+		 * page cache, so only copy up to a regular page boundary.
+		 */
 		offset = offset_in_page(pos);
 		bytes = min_t(unsigned long, PAGE_SIZE - offset,
 						iov_iter_count(i));
@@ -1320,7 +1326,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
 {
 	sector_t sector = iomap_sector(&wpc->iomap, offset);
 	unsigned len = i_blocksize(inode);
-	unsigned poff = offset & (PAGE_SIZE - 1);
+	unsigned poff = offset & (thp_size(page) - 1);
 	bool merged, same_page = false;

 	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
@@ -1372,9 +1378,10 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	unsigned len = i_blocksize(inode);
 	u64 file_offset; /* file offset of page */
 	int error = 0, count = 0, i;
+	int nr_blocks = i_blocks_per_page(inode, page);
 	LIST_HEAD(submit_list);

-	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
+	WARN_ON_ONCE(nr_blocks > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0);

 	/*
@@ -1382,8 +1389,8 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * end of the current map or find the current map invalid, grab a new
 	 * one.
 	 */
-	for (i = 0, file_offset = page_offset(page);
-	     i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset;
+	for (i = 0, file_offset = file_offset_of_page(page);
+	     i < nr_blocks && file_offset < end_offset;
 	     i++, file_offset += len) {
 		if (iop && !test_bit(i, iop->uptodate))
 			continue;
@@ -1477,7 +1484,6 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 {
 	struct iomap_writepage_ctx *wpc = data;
 	struct inode *inode = page->mapping->host;
-	pgoff_t end_index;
 	u64 end_offset;
 	loff_t offset;

@@ -1518,10 +1524,9 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 	 * ---------------------------------^------------------|
 	 */
 	offset = i_size_read(inode);
-	end_index = offset >> PAGE_SHIFT;
-	if (page->index < end_index)
-		end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT;
-	else {
+	end_offset = file_offset_of_next_page(page);
+
+	if (end_offset > offset) {
 		/*
 		 * Check whether the page to write out is beyond or straddles
 		 * i_size or not.
@@ -1533,7 +1538,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * |			|	Straddles	|
 		 * ---------------------------------^-----------|--------|
 		 */
-		unsigned offset_into_page = offset & (PAGE_SIZE - 1);
+		unsigned offset_into_page = offset_in_this_page(page, offset);
+		pgoff_t end_index = offset >> PAGE_SHIFT;

 		/*
 		 * Skip the page if it is fully outside i_size, e.g. due to a
@@ -1564,7 +1570,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * memory is zeroed when mapped, and writes to that region are
 		 * not written out to the file."
 		 */
-		zero_user_segment(page, offset_into_page, PAGE_SIZE);
+		zero_user_segment(page, offset_into_page, thp_size(page));

 		/* Adjust the end_offset to the end of file */
 		end_offset = offset;

From patchwork Wed Feb 12 04:18:38 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377543
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 18/25] iomap: Inline data shouldn't see large pages
Date: Tue, 11 Feb 2020 20:18:38 -0800
Message-Id: <20200212041845.25879-19-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

Assert that we're not seeing large pages in functions that read/write
inline data, rather than zeroing out the tail.

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index af1f56408fcd..a7a21b99b3a0 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -224,6 +224,7 @@ iomap_read_inline_data(struct inode *inode, struct page *page,
 		return;

 	BUG_ON(page->index);
+	BUG_ON(PageCompound(page));
 	BUG_ON(size > PAGE_SIZE - offset_in_page(iomap->inline_data));

 	addr = kmap_atomic(page);
@@ -710,6 +711,7 @@ iomap_write_end_inline(struct inode *inode, struct page *page,
 	void *addr;

 	WARN_ON_ONCE(!PageUptodate(page));
+	BUG_ON(PageCompound(page));
 	BUG_ON(pos + copied > PAGE_SIZE - offset_in_page(iomap->inline_data));

 	addr = kmap_atomic(page);

From patchwork Wed Feb 12 04:18:39 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377505
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 19/25] xfs: Support large pages
Date: Tue, 11 Feb 2020 20:18:39 -0800
Message-Id: <20200212041845.25879-20-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>
From: "Matthew Wilcox (Oracle)"

There is one place which assumes the size of a page; fix it.

Signed-off-by: Matthew Wilcox (Oracle)
---
 fs/xfs/xfs_aops.c  | 2 +-
 fs/xfs/xfs_super.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 5573bf2957dd..0c10fd799f8c 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -548,7 +548,7 @@ xfs_discard_page(
 	if (error && !XFS_FORCED_SHUTDOWN(mp))
 		xfs_alert(mp, "page discard unable to remove delalloc mapping.");
 out_invalidate:
-	iomap_invalidatepage(page, 0, PAGE_SIZE);
+	iomap_invalidatepage(page, 0, thp_size(page));
 }

 static const struct iomap_writeback_ops xfs_writeback_ops = {
diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
index 2094386af8ac..a02efa1f72af 100644
--- a/fs/xfs/xfs_super.c
+++ b/fs/xfs/xfs_super.c
@@ -1779,7 +1779,7 @@ static struct file_system_type xfs_fs_type = {
 	.init_fs_context	= xfs_init_fs_context,
 	.parameters		= xfs_fs_parameters,
 	.kill_sb		= kill_block_super,
-	.fs_flags		= FS_REQUIRES_DEV,
+	.fs_flags		= FS_REQUIRES_DEV | FS_LARGE_PAGES,
 };
 MODULE_ALIAS_FS("xfs");

From patchwork Wed Feb 12 04:18:40 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377477
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCH v2 20/25] mm: Make prep_transhuge_page return its argument
Date: Tue, 11 Feb 2020 20:18:40 -0800
Message-Id: <20200212041845.25879-21-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

By permitting NULL or order-0 pages as an argument, and returning the
argument, callers can write:

	return prep_transhuge_page(alloc_pages(...));

instead of assigning the result to a temporary variable and
conditionally passing that to prep_transhuge_page().

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kirill A. Shutemov
---
 include/linux/huge_mm.h | 7 +++++--
 mm/huge_memory.c        | 9 +++++++--
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 3de788ee25bd..865b9c16c99c 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -158,7 +158,7 @@ extern unsigned long thp_get_unmapped_area(struct file *filp,
 		unsigned long addr, unsigned long len, unsigned long pgoff,
 		unsigned long flags);

-extern void prep_transhuge_page(struct page *page);
+extern struct page *prep_transhuge_page(struct page *page);
 extern void free_transhuge_page(struct page *page);

 bool is_transparent_hugepage(struct page *page);
@@ -307,7 +307,10 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
 	return false;
 }

-static inline void prep_transhuge_page(struct page *page) {}
+static inline struct page *prep_transhuge_page(struct page *page)
+{
+	return page;
+}

 static inline bool is_transparent_hugepage(struct page *page)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b08b199f9a11..b52e007f0856 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -508,15 +508,20 @@ static inline struct deferred_split *get_deferred_split_queue(struct page *page)
 }
 #endif

-void prep_transhuge_page(struct page *page)
+struct page *prep_transhuge_page(struct page *page)
 {
+	if (!page || compound_order(page) == 0)
+		return page;
 	/*
-	 * we use page->mapping and page->indexlru in second tail page
+	 * we use page->mapping and page->index in second tail page
 	 * as list_head: assuming THP order >= 2
 	 */
+	BUG_ON(compound_order(page) == 1);

 	INIT_LIST_HEAD(page_deferred_list(page));
 	set_compound_page_dtor(page, TRANSHUGE_PAGE_DTOR);
+
+	return page;
 }

 bool is_transparent_hugepage(struct page *page)

From patchwork Wed Feb 12 04:18:41 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377495
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCH v2 21/25] mm: Add __page_cache_alloc_order
Date: Tue, 11 Feb 2020 20:18:41 -0800
Message-Id: <20200212041845.25879-22-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

This new function allows page cache pages to be allocated that are
larger than an order-0 page.

Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Kirill A. Shutemov
---
 include/linux/pagemap.h | 24 +++++++++++++++++++++---
 mm/filemap.c            | 12 ++++++++----
 2 files changed, 29 insertions(+), 7 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 497197315b73..64a3cf79611f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -207,15 +207,33 @@ static inline int page_cache_add_speculative(struct page *page, int count)
 	return __page_cache_add_speculative(page, count);
 }

+static inline gfp_t thp_gfpmask(gfp_t gfp)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	/* We'd rather allocate smaller pages than stall a page fault */
+	gfp |= GFP_TRANSHUGE_LIGHT;
+	gfp &= ~__GFP_DIRECT_RECLAIM;
+#endif
+	return gfp;
+}
+
 #ifdef CONFIG_NUMA
-extern struct page *__page_cache_alloc(gfp_t gfp);
+extern struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order);
 #else
-static inline struct page *__page_cache_alloc(gfp_t gfp)
+static inline
+struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
 {
-	return alloc_pages(gfp, 0);
+	if (order == 0)
+		return alloc_pages(gfp, 0);
+	return prep_transhuge_page(alloc_pages(thp_gfpmask(gfp), order));
 }
 #endif

+static inline struct page *__page_cache_alloc(gfp_t gfp)
+{
+	return __page_cache_alloc_order(gfp, 0);
+}
+
 static inline struct page *page_cache_alloc(struct address_space *x)
 {
 	return __page_cache_alloc(mapping_gfp_mask(x));
diff --git a/mm/filemap.c b/mm/filemap.c
index 3204293f9b58..1061463a169e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -941,24 +941,28 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
 EXPORT_SYMBOL_GPL(add_to_page_cache_lru);

 #ifdef CONFIG_NUMA
-struct page *__page_cache_alloc(gfp_t gfp)
+struct page *__page_cache_alloc_order(gfp_t gfp, unsigned int order)
 {
 	int n;
 	struct page *page;

+	if (order > 0)
+		gfp = thp_gfpmask(gfp);
+
 	if (cpuset_do_page_mem_spread()) {
 		unsigned int cpuset_mems_cookie;
 		do {
 			cpuset_mems_cookie = read_mems_allowed_begin();
 			n = cpuset_mem_spread_node();
-			page = __alloc_pages_node(n, gfp, 0);
+			page = __alloc_pages_node(n, gfp, order);
+			prep_transhuge_page(page);
 		} while (!page && read_mems_allowed_retry(cpuset_mems_cookie));

 		return page;
 	}
-	return alloc_pages(gfp, 0);
+	return prep_transhuge_page(alloc_pages(gfp, order));
 }
-EXPORT_SYMBOL(__page_cache_alloc);
+EXPORT_SYMBOL(__page_cache_alloc_order);
 #endif

 /*

From patchwork Wed Feb 12 04:18:42 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11377523
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 22/25] mm: Allow large pages to be added to the page cache
Date: Tue, 11 Feb 2020 20:18:42 -0800
Message-Id: <20200212041845.25879-23-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>
References: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

We return -EEXIST if there are any non-shadow entries in the page cache
in the range covered by the large page.  If there are multiple shadow
entries in the range, we set *shadowp to one of them (currently the one
at the highest index).  If that turns out to be the wrong answer, we can
implement something more complex.  This is mostly modelled after the
equivalent function in the shmem code.
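[Editorial note] The insertion rule this commit message describes — fail on any non-shadow entry in the range, remember the highest-index shadow entry, then fill every slot with the same page — can be modelled in plain C. This is a sketch only: a flat array stands in for the XArray, and the low-bit tag merely mimics xa_is_value(); the real code does this under xas_lock_irq() with xas_for_each_conflict() and xas_create_range().

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Low bit set marks a "shadow" (value) entry, mimicking xa_is_value(). */
static int is_shadow(const void *entry)
{
	return ((uintptr_t)entry & 1) != 0;
}

/*
 * Store page in nr consecutive slots starting at index.  Returns -EEXIST
 * if any slot already holds a real (non-shadow) entry; otherwise reports
 * the highest-index shadow entry through *shadowp and fills the range.
 */
static int add_page_range(void **slots, size_t index, size_t nr,
			  void *page, void **shadowp)
{
	size_t i;

	for (i = 0; i < nr; i++) {
		void *old = slots[index + i];

		if (old && !is_shadow(old))
			return -EEXIST;
		if (old && shadowp)
			*shadowp = old;	/* highest-index shadow wins */
	}
	for (i = 0; i < nr; i++)
		slots[index + i] = page;
	return 0;
}
```

The two-pass shape mirrors the patch: conflicts are scanned before anything is stored, so a failed insert leaves the structure untouched.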
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 46 +++++++++++++++++++++++++++++++++-------------
 1 file changed, 33 insertions(+), 13 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1061463a169e..08b5cd4ce47b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -834,6 +834,7 @@ static int __add_to_page_cache_locked(struct page *page,
        int huge = PageHuge(page);
        struct mem_cgroup *memcg;
        int error;
+       unsigned int nr = 1;
        void *old;

        VM_BUG_ON_PAGE(!PageLocked(page), page);
@@ -845,31 +846,50 @@ static int __add_to_page_cache_locked(struct page *page,
                                          gfp_mask, &memcg, false);
                if (error)
                        return error;
+               xas_set_order(&xas, offset, thp_order(page));
+               nr = hpage_nr_pages(page);
        }

-       get_page(page);
+       page_ref_add(page, nr);
        page->mapping = mapping;
        page->index = offset;

        do {
+               unsigned long exceptional = 0;
+               unsigned int i = 0;
+
                xas_lock_irq(&xas);
-               old = xas_load(&xas);
-               if (old && !xa_is_value(old))
-                       xas_set_err(&xas, -EEXIST);
-               xas_store(&xas, page);
+               xas_for_each_conflict(&xas, old) {
+                       if (!xa_is_value(old)) {
+                               xas_set_err(&xas, -EEXIST);
+                               break;
+                       }
+                       exceptional++;
+                       if (shadowp)
+                               *shadowp = old;
+               }
+               xas_create_range(&xas);
                if (xas_error(&xas))
                        goto unlock;

-               if (xa_is_value(old)) {
-                       mapping->nrexceptional--;
-                       if (shadowp)
-                               *shadowp = old;
+next:
+               xas_store(&xas, page);
+               if (++i < nr) {
+                       xas_next(&xas);
+                       goto next;
                }
-               mapping->nrpages++;
+               mapping->nrexceptional -= exceptional;
+               mapping->nrpages += nr;

                /* hugetlb pages do not participate in page cache accounting */
-               if (!huge)
-                       __inc_node_page_state(page, NR_FILE_PAGES);
+               if (!huge) {
+                       __mod_node_page_state(page_pgdat(page), NR_FILE_PAGES,
+                                               nr);
+                       if (nr > 1) {
+                               __inc_node_page_state(page, NR_FILE_THPS);
+                               filemap_nr_thps_inc(mapping);
+                       }
+               }
unlock:
                xas_unlock_irq(&xas);
        } while (xas_nomem(&xas, gfp_mask & GFP_RECLAIM_MASK));
@@ -886,7 +906,7 @@ static int __add_to_page_cache_locked(struct page *page,
        /* Leave page->index set: truncation relies upon it */
        if (!huge)
                mem_cgroup_cancel_charge(page, memcg, false);
-       put_page(page);
+       page_ref_sub(page, nr);
        return xas_error(&xas);
 }
 ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);

From patchwork Wed Feb 12 04:18:43 2020
X-Patchwork-Id: 11377503
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 23/25] mm: Allow large pages to be removed from the page cache
Date: Tue, 11 Feb 2020 20:18:43 -0800
Message-Id: <20200212041845.25879-24-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

page_cache_free_page() assumes compound pages are PMD_SIZE; fix that
assumption.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/filemap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 08b5cd4ce47b..e74a22af7e4e 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -248,7 +248,7 @@ static void page_cache_free_page(struct address_space *mapping,
        freepage(page);

        if (PageTransHuge(page) && !PageHuge(page)) {
-               page_ref_sub(page, HPAGE_PMD_NR);
+               page_ref_sub(page, hpage_nr_pages(page));
                VM_BUG_ON_PAGE(page_count(page) <= 0, page);
        } else {
                put_page(page);

From patchwork Wed Feb 12 04:18:44 2020
X-Patchwork-Id: 11377499
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org
Subject: [PATCH v2 24/25] mm: Add large page readahead
Date: Tue, 11 Feb 2020 20:18:44 -0800
Message-Id: <20200212041845.25879-25-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

If the filesystem supports large pages, allocate larger pages in the
readahead code when it seems worth doing.
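The "seems worth doing" test is the order ramp-up in this patch's page_cache_readahead_order(): grow the allocation order by two each round, cap it at PMD_ORDER, and shrink it until a single allocation fits the readahead window. That computation can be pulled out into a standalone sketch (the name `ra_grow_order` and the bare parameters are ours, not the kernel's; it assumes `ra_size >= 1` so the shrink loop terminates at order 0):

```c
#include <assert.h>

/*
 * Sketch of the readahead order ramp-up: each round may grow the
 * allocation order by two, never beyond pmd_order, and never so far
 * that one allocation (1 << order pages) exceeds the readahead window
 * of ra_size pages.
 */
static unsigned int ra_grow_order(unsigned int order, unsigned long ra_size,
                                  unsigned int pmd_order)
{
        if (order < pmd_order) {
                order += 2;                      /* aggressive ramp-up */
                if (order > pmd_order)
                        order = pmd_order;       /* never above PMD size */
                while ((1UL << order) > ra_size)
                        order--;                 /* fit inside the window */
        }
        return order;
}
```

With a sufficiently large window and x86-64's PMD_ORDER of 9, successive rounds ramp through orders 0, 2, 4, 6, 8, 9.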
The heuristic for choosing larger page sizes will surely need some
tuning, but this aggressive ramp-up seems good for testing.

Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/readahead.c | 98 +++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 93 insertions(+), 5 deletions(-)

diff --git a/mm/readahead.c b/mm/readahead.c
index 29ca25c8f01e..b582f09aa7e3 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -406,13 +406,96 @@ static int try_context_readahead(struct address_space *mapping,
        return 1;
 }

+static inline int ra_alloc_page(struct address_space *mapping, pgoff_t offset,
+               pgoff_t mark, unsigned int order, gfp_t gfp)
+{
+       int err;
+       struct page *page = __page_cache_alloc_order(gfp, order);
+
+       if (!page)
+               return -ENOMEM;
+       if (mark - offset < (1UL << order))
+               SetPageReadahead(page);
+       err = add_to_page_cache_lru(page, mapping, offset, gfp);
+       if (err)
+               put_page(page);
+       return err;
+}
+
+#define PMD_ORDER      (PMD_SHIFT - PAGE_SHIFT)
+
+static unsigned long page_cache_readahead_order(struct address_space *mapping,
+               struct file_ra_state *ra, struct file *file, unsigned int order)
+{
+       struct readahead_control rac = {
+               .mapping = mapping,
+               .file = file,
+               .start = ra->start,
+               .nr_pages = 0,
+       };
+       unsigned int old_order = order;
+       pgoff_t offset = ra->start;
+       pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
+       pgoff_t mark = offset + ra->size - ra->async_size;
+       int err = 0;
+       gfp_t gfp = readahead_gfp_mask(mapping);
+
+       limit = min(limit, offset + ra->size - 1);
+
+       /* Grow page size up to PMD size */
+       if (order < PMD_ORDER) {
+               order += 2;
+               if (order > PMD_ORDER)
+                       order = PMD_ORDER;
+               while ((1 << order) > ra->size)
+                       order--;
+       }
+
+       /* If size is somehow misaligned, fill with order-0 pages */
+       while (!err && offset & ((1UL << old_order) - 1)) {
+               err = ra_alloc_page(mapping, offset++, mark, 0, gfp);
+               if (!err)
+                       rac.nr_pages++;
+       }
+
+       while (!err && offset & ((1UL << order) - 1)) {
+               err = ra_alloc_page(mapping, offset, mark, old_order, gfp);
+               if (!err)
+                       rac.nr_pages += 1UL << old_order;
+               offset += 1UL << old_order;
+       }
+
+       while (!err && offset <= limit) {
+               err = ra_alloc_page(mapping, offset, mark, order, gfp);
+               if (!err)
+                       rac.nr_pages += 1UL << order;
+               offset += 1UL << order;
+       }
+
+       if (offset > limit) {
+               ra->size += offset - limit - 1;
+               ra->async_size += offset - limit - 1;
+       }
+
+       read_pages(&rac, NULL);
+
+       /*
+        * If there were already pages in the page cache, then we may have
+        * left some gaps.  Let the regular readahead code take care of this
+        * situation.
+        */
+       if (err)
+               return ra_submit(ra, mapping, file);
+       return 0;
+}
+
 /*
  * A minimal readahead algorithm for trivial sequential/random reads.
  */
 static unsigned long
 ondemand_readahead(struct address_space *mapping,
                   struct file_ra_state *ra, struct file *filp,
-                  bool hit_readahead_marker, pgoff_t offset,
+                  struct page *page, pgoff_t offset,
                   unsigned long req_size)
 {
        struct backing_dev_info *bdi = inode_to_bdi(mapping->host);
@@ -451,7 +534,7 @@ ondemand_readahead(struct address_space *mapping,
         * Query the pagecache for async_size, which normally equals to
         * readahead size. Ramp it up and use it as the new readahead size.
         */
-       if (hit_readahead_marker) {
+       if (page) {
                pgoff_t start;

                rcu_read_lock();
@@ -520,7 +603,12 @@ ondemand_readahead(struct address_space *mapping,
                }
        }

-       return ra_submit(ra, mapping, filp);
+       if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) || !page ||
+           !(mapping->host->i_sb->s_type->fs_flags & FS_LARGE_PAGES))
+               return ra_submit(ra, mapping, filp);
+
+       return page_cache_readahead_order(mapping, ra, filp,
+                                         compound_order(page));
 }

 /**
@@ -555,7 +643,7 @@ void page_cache_sync_readahead(struct address_space *mapping,
        }

        /* do read-ahead */
-       ondemand_readahead(mapping, ra, filp, false, offset, req_size);
+       ondemand_readahead(mapping, ra, filp, NULL, offset, req_size);
 }
 EXPORT_SYMBOL_GPL(page_cache_sync_readahead);

@@ -602,7 +690,7 @@ page_cache_async_readahead(struct address_space *mapping,
                return;

        /* do read-ahead */
-       ondemand_readahead(mapping, ra, filp, true, offset, req_size);
+       ondemand_readahead(mapping, ra, filp, page, offset, req_size);
 }
 EXPORT_SYMBOL_GPL(page_cache_async_readahead);

From patchwork Wed Feb 12 04:18:45 2020
X-Patchwork-Id: 11377531
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: William Kucharski, linux-kernel@vger.kernel.org, Matthew Wilcox
Subject: [PATCH v2 25/25] mm: Align THP mappings for non-DAX
Date: Tue, 11 Feb 2020 20:18:45 -0800
Message-Id: <20200212041845.25879-26-willy@infradead.org>
In-Reply-To: <20200212041845.25879-1-willy@infradead.org>

From: William Kucharski

When we have the opportunity to use transparent huge pages to map a
file, we want to follow the same rules as DAX.
Signed-off-by: William Kucharski
Signed-off-by: Matthew Wilcox (Oracle)
---
 mm/huge_memory.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b52e007f0856..b8d9e0d76062 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -577,13 +577,10 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
        unsigned long ret;
        loff_t off = (loff_t)pgoff << PAGE_SHIFT;

-       if (!IS_DAX(filp->f_mapping->host) || !IS_ENABLED(CONFIG_FS_DAX_PMD))
-               goto out;
-
        ret = __thp_get_unmapped_area(filp, addr, len, off, flags, PMD_SIZE);
        if (ret)
                return ret;
-out:
+
        return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
 }
 EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
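For a PMD mapping to be possible at all, the chosen virtual address must be congruent to the file offset modulo PMD_SIZE, which is what __thp_get_unmapped_area() arranges by padding the search and then nudging the returned address. A minimal sketch of that congruence adjustment (`thp_align` and its parameters are illustrative, not the kernel helper; it assumes `pmd_size` is a power of two):

```c
#include <assert.h>

/*
 * Pick the lowest address at or above base whose offset within a
 * PMD-sized region matches that of the file offset off, so file data
 * can be mapped with PMD-sized TLB entries.  Relies on pmd_size being
 * a power of two, so the mask extracts the low bits of the difference.
 */
static unsigned long thp_align(unsigned long base, unsigned long off,
                               unsigned long pmd_size)
{
        return base + ((off - base) & (pmd_size - 1));
}
```

The result is congruent to `off` modulo `pmd_size` and lies within `pmd_size - 1` bytes of `base`, which is why the kernel pads the original search window by a PMD before adjusting.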