From patchwork Fri May 29 02:57:54 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11577515
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)", linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 Christoph Hellwig
Subject: [PATCH v5 09/39] fs: Introduce i_blocks_per_page
Date: Thu, 28 May 2020 19:57:54 -0700
Message-Id: <20200529025824.32296-10-willy@infradead.org>
In-Reply-To: <20200529025824.32296-1-willy@infradead.org>
References: <20200529025824.32296-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

This helper is useful both for large pages in the page cache and for
supporting block sizes larger than the page size.  Convert some example
users (we have a few different ways of writing this idiom).

Signed-off-by: Matthew Wilcox (Oracle)
Reviewed-by: Christoph Hellwig
---
 fs/iomap/buffered-io.c  |  8 ++++----
 fs/jfs/jfs_metapage.c   |  2 +-
 fs/xfs/xfs_aops.c       |  2 +-
 include/linux/pagemap.h | 16 ++++++++++++++++
 4 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 890c8fcda4f3..4bc37bf8d057 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -46,7 +46,7 @@ iomap_page_create(struct inode *inode, struct page *page)
 {
 	struct iomap_page *iop = to_iomap_page(page);
 
-	if (iop || i_blocksize(inode) == PAGE_SIZE)
+	if (iop || i_blocks_per_page(inode, page) <= 1)
 		return iop;
 
 	iop = kmalloc(sizeof(*iop), GFP_NOFS | __GFP_NOFAIL);
@@ -152,7 +152,7 @@ iomap_iop_set_range_uptodate(struct page *page, unsigned off, unsigned len)
 	unsigned int i;
 
 	spin_lock_irqsave(&iop->uptodate_lock, flags);
-	for (i = 0; i < PAGE_SIZE / i_blocksize(inode); i++) {
+	for (i = 0; i < i_blocks_per_page(inode, page); i++) {
 		if (i >= first && i <= last)
 			set_bit(i, iop->uptodate);
 		else if (!test_bit(i, iop->uptodate))
@@ -1090,7 +1090,7 @@ iomap_finish_page_writeback(struct inode *inode, struct page *page,
 		mapping_set_error(inode->i_mapping, -EIO);
 	}
 
-	WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
+	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_count) <= 0);
 
 	if (!iop || atomic_dec_and_test(&iop->write_count))
@@ -1386,7 +1386,7 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	int error = 0, count = 0, i;
 	LIST_HEAD(submit_list);
 
-	WARN_ON_ONCE(i_blocksize(inode) < PAGE_SIZE && !iop);
+	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0);
 
 	/*
diff --git a/fs/jfs/jfs_metapage.c b/fs/jfs/jfs_metapage.c
index a2f5338a5ea1..176580f54af9 100644
--- a/fs/jfs/jfs_metapage.c
+++ b/fs/jfs/jfs_metapage.c
@@ -473,7 +473,7 @@ static int metapage_readpage(struct file *fp, struct page *page)
 	struct inode *inode = page->mapping->host;
 	struct bio *bio = NULL;
 	int block_offset;
-	int blocks_per_page = PAGE_SIZE >> inode->i_blkbits;
+	int blocks_per_page = i_blocks_per_page(inode, page);
 	sector_t page_start;	/* address of page in fs blocks */
 	sector_t pblock;
 	int xlen;
diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
index 1fd4fb7a607c..5b25f5ee84dc 100644
--- a/fs/xfs/xfs_aops.c
+++ b/fs/xfs/xfs_aops.c
@@ -544,7 +544,7 @@ xfs_discard_page(
 			page, ip->i_ino, offset);
 
 	error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
-			PAGE_SIZE / i_blocksize(inode));
+			i_blocks_per_page(inode, page));
 	if (error && !XFS_FORCED_SHUTDOWN(mp))
 		xfs_alert(mp, "page discard unable to remove delalloc mapping.");
 out_invalidate:
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e40527e53620..53a914105591 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -851,4 +851,20 @@ static inline int page_mkwrite_check_truncate(struct page *page,
 	return offset;
 }
 
+/**
+ * i_blocks_per_page - How many blocks fit in this page.
+ * @inode: The inode which contains the blocks.
+ * @page: The (potentially large) page.
+ *
+ * If the block size is larger than the size of this page, this function
+ * will return zero.
+ *
+ * Context: Any context.
+ * Return: The number of filesystem blocks covered by this page.
+ */
+static inline
+unsigned int i_blocks_per_page(struct inode *inode, struct page *page)
+{
+	return thp_size(page) >> inode->i_blkbits;
+}
 #endif /* _LINUX_PAGEMAP_H */
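
For illustration only (not part of the patch): a filesystem that open-codes
the blocks-per-page calculation could be converted along the lines of the
sketch below.  The function examplefs_blocks_in_page() is a made-up name for
this example; it simply shows the intended replacement of the old
PAGE_SIZE-based idiom with i_blocks_per_page().

	#include <linux/fs.h>
	#include <linux/pagemap.h>

	/*
	 * Sketch: count how many filesystem blocks a (potentially large)
	 * page covers.  The old idiom assumed the page was exactly
	 * PAGE_SIZE bytes; i_blocks_per_page() is based on thp_size(),
	 * so it also handles THPs and returns zero when the block size
	 * is larger than the page.
	 */
	static unsigned int examplefs_blocks_in_page(struct inode *inode,
						     struct page *page)
	{
		/* Old idiom: return PAGE_SIZE >> inode->i_blkbits; */
		return i_blocks_per_page(inode, page);
	}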