From patchwork Mon Aug 24 15:16:50 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11733481
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Darrick J. Wong",
    linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 01/11] fs: Make page_mkwrite_check_truncate thp-aware
Date: Mon, 24 Aug 2020 16:16:50 +0100
Message-Id: <20200824151700.16097-2-willy@infradead.org>
In-Reply-To: <20200824151700.16097-1-willy@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>

If the page is compound, check the last index in the page and return
the appropriate size.  Change the return type to ssize_t in case we ever
support pages larger than 2GB.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/pagemap.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 853733286138..50b176b65911 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -876,22 +876,22 @@ static inline unsigned long dir_pages(struct inode *inode)
  * @page: the page to check
  * @inode: the inode to check the page against
  *
- * Returns the number of bytes in the page up to EOF,
+ * Return: The number of bytes in the page up to EOF,
  * or -EFAULT if the page was truncated.
  */
-static inline int page_mkwrite_check_truncate(struct page *page,
+static inline ssize_t page_mkwrite_check_truncate(struct page *page,
 					      struct inode *inode)
 {
 	loff_t size = i_size_read(inode);
 	pgoff_t index = size >> PAGE_SHIFT;
-	int offset = offset_in_page(size);
+	unsigned long offset = offset_in_thp(page, size);
 
 	if (page->mapping != inode->i_mapping)
 		return -EFAULT;
 
 	/* page is wholly inside EOF */
-	if (page->index < index)
-		return PAGE_SIZE;
+	if (page->index + thp_nr_pages(page) - 1 < index)
+		return thp_size(page);
 	/* page is wholly past EOF */
 	if (page->index > index || !offset)
 		return -EFAULT;
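The boundary arithmetic above is easy to check outside the kernel.  Below
is a minimal userspace model of it (an illustration, not kernel code):
nr_pages stands in for thp_nr_pages(), -14 stands in for -EFAULT, and the
straddle case returns the in-page offset, as the tail of the function
(truncated in this dump) is assumed to do.

	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SHIFT	12
	#define PAGE_SIZE	(1UL << PAGE_SHIFT)

	static int64_t check_truncate(uint64_t page_index,
			unsigned long nr_pages, uint64_t i_size)
	{
		uint64_t index = i_size >> PAGE_SHIFT; /* base page holding EOF */
		unsigned long offset = i_size & (nr_pages * PAGE_SIZE - 1);

		if (page_index + nr_pages - 1 < index)	/* wholly inside EOF */
			return nr_pages * PAGE_SIZE;
		if (page_index > index || !offset)	/* wholly past EOF */
			return -14;
		return offset;				/* straddles EOF */
	}

	int main(void)
	{
		/* 2MiB THP at file offset 0, EOF at 1MiB + 100 bytes: */
		printf("%lld\n", (long long)check_truncate(0, 512,
				(1UL << 20) + 100));	/* 1048676 */
		return 0;
	}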
From patchwork Mon Aug 24 15:16:51 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11733457
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Darrick J. Wong",
    linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 02/11] mm: Support THPs in zero_user_segments
Date: Mon, 24 Aug 2020 16:16:51 +0100
Message-Id: <20200824151700.16097-3-willy@infradead.org>
In-Reply-To: <20200824151700.16097-1-willy@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>

We can only kmap() one subpage of a THP at a time, so loop over all
relevant subpages, skipping ones which don't need to be zeroed.

This is too large to inline when THPs are enabled and we actually need
highmem, so put it in highmem.c.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/highmem.h | 15 +++++++---
 mm/highmem.c            | 62 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 71 insertions(+), 6 deletions(-)
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 14e6202ce47f..5390bfd4bdd3 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -284,13 +284,18 @@ static inline void clear_highpage(struct page *page)
 	kunmap_atomic(kaddr);
 }
 
+#if defined(CONFIG_HIGHMEM) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
+void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
+		unsigned start2, unsigned end2);
+#else /* !HIGHMEM || !TRANSPARENT_HUGEPAGE */
 static inline void zero_user_segments(struct page *page,
-	unsigned start1, unsigned end1,
-	unsigned start2, unsigned end2)
+		unsigned start1, unsigned end1,
+		unsigned start2, unsigned end2)
 {
 	void *kaddr = kmap_atomic(page);
+	unsigned int i;
 
-	BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);
+	BUG_ON(end1 > thp_size(page) || end2 > thp_size(page));
 
 	if (end1 > start1)
 		memset(kaddr + start1, 0, end1 - start1);
@@ -299,8 +304,10 @@ static inline void zero_user_segments(struct page *page,
 		memset(kaddr + start2, 0, end2 - start2);
 
 	kunmap_atomic(kaddr);
-	flush_dcache_page(page);
+	for (i = 0; i < thp_nr_pages(page); i++)
+		flush_dcache_page(page + i);
 }
+#endif /* !HIGHMEM || !TRANSPARENT_HUGEPAGE */
 
 static inline void zero_user_segment(struct page *page,
 	unsigned start, unsigned end)
diff --git a/mm/highmem.c b/mm/highmem.c
index 64d8dea47dd1..c0f1e389a153 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -367,9 +367,67 @@ void kunmap_high(struct page *page)
 	if (need_wakeup)
 		wake_up(pkmap_map_wait);
 }
-
 EXPORT_SYMBOL(kunmap_high);
-#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
+		unsigned start2, unsigned end2)
+{
+	unsigned int i;
+
+	BUG_ON(end1 > thp_size(page) || end2 > thp_size(page));
+
+	for (i = 0; i < thp_nr_pages(page); i++) {
+		void *kaddr;
+		unsigned this_end;
+
+		if (end1 == 0 && start2 >= PAGE_SIZE) {
+			start2 -= PAGE_SIZE;
+			end2 -= PAGE_SIZE;
+			continue;
+		}
+
+		if (start1 >= PAGE_SIZE) {
+			start1 -= PAGE_SIZE;
+			end1 -= PAGE_SIZE;
+			if (start2) {
+				start2 -= PAGE_SIZE;
+				end2 -= PAGE_SIZE;
+			}
+			continue;
+		}
+
+		kaddr = kmap_atomic(page + i);
+
+		this_end = min_t(unsigned, end1, PAGE_SIZE);
+		if (end1 > start1)
+			memset(kaddr + start1, 0, this_end - start1);
+		end1 -= this_end;
+		start1 = 0;
+
+		if (start2 >= PAGE_SIZE) {
+			start2 -= PAGE_SIZE;
+			end2 -= PAGE_SIZE;
+		} else {
+			this_end = min_t(unsigned, end2, PAGE_SIZE);
+			if (end2 > start2)
+				memset(kaddr + start2, 0, this_end - start2);
+			end2 -= this_end;
+			start2 = 0;
+		}
+
+		kunmap_atomic(kaddr);
+		flush_dcache_page(page + i);
+
+		if (!end1 && !end2)
+			break;
+	}
+
+	BUG_ON((start1 | start2 | end1 | end2) != 0);
+}
+EXPORT_SYMBOL(zero_user_segments);
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#endif /* CONFIG_HIGHMEM */
 
 #if defined(HASHED_PAGE_VIRTUAL)
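The subpage walk is straightforward to model in userspace.  The sketch
below zeroes one range at a time for clarity, whereas the kernel version
above interleaves both ranges in a single pass so each subpage is kmapped
at most once; the byte array stands in for the THP and plain memset() for
zeroing through the mapped window.

	#include <stdio.h>
	#include <string.h>

	#define PAGE_SIZE 4096u

	/* Zero [start, end) of an nr_pages-subpage buffer, touching only
	 * the subpages the range actually intersects. */
	static void zero_range(unsigned char *mem, unsigned nr_pages,
			       unsigned start, unsigned end)
	{
		for (unsigned i = 0; i < nr_pages; i++) {
			unsigned lo = i * PAGE_SIZE, hi = lo + PAGE_SIZE;
			unsigned s = start > lo ? start : lo;
			unsigned e = end < hi ? end : hi;

			if (s < e)	/* "map" this subpage once */
				memset(mem + s, 0, e - s);
		}
	}

	int main(void)
	{
		static unsigned char thp[8 * PAGE_SIZE];

		memset(thp, 0xff, sizeof(thp));
		zero_range(thp, 8, 100, 200);	/* segment 1 */
		zero_range(thp, 8, 5000, 9000);	/* crosses two subpages */
		printf("%d %d %d\n", thp[150], thp[4999], thp[6000]); /* 0 255 0 */
		return 0;
	}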
From patchwork Mon Aug 24 15:16:52 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11733507
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Darrick J. Wong",
    linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 03/11] mm: Zero the head page, not the tail page
Date: Mon, 24 Aug 2020 16:16:52 +0100
Message-Id: <20200824151700.16097-4-willy@infradead.org>
In-Reply-To: <20200824151700.16097-1-willy@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>

Pass the head page to zero_user_segment(), not the tail page, and adjust
the byte offsets appropriately.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/shmem.c    | 7 +++++++
 mm/truncate.c | 7 +++++++
 2 files changed, 14 insertions(+)
diff --git a/mm/shmem.c b/mm/shmem.c
index 271548ca20f3..77982149b437 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -958,11 +958,18 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 		struct page *page = NULL;
 		shmem_getpage(inode, start - 1, &page, SGP_READ);
 		if (page) {
+			struct page *head = thp_head(page);
 			unsigned int top = PAGE_SIZE;
 			if (start > end) {
 				top = partial_end;
 				partial_end = 0;
 			}
+			if (head != page) {
+				unsigned int diff = start - 1 - head->index;
+				partial_start += diff << PAGE_SHIFT;
+				top += diff << PAGE_SHIFT;
+				page = head;
+			}
 			zero_user_segment(page, partial_start, top);
 			set_page_dirty(page);
 			unlock_page(page);
diff --git a/mm/truncate.c b/mm/truncate.c
index dd9ebc1da356..152974888124 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -374,12 +374,19 @@ void truncate_inode_pages_range(struct address_space *mapping,
 	if (partial_start) {
 		struct page *page = find_lock_page(mapping, start - 1);
 		if (page) {
+			struct page *head = thp_head(page);
 			unsigned int top = PAGE_SIZE;
 			if (start > end) {
 				/* Truncation within a single page */
 				top = partial_end;
 				partial_end = 0;
 			}
+			if (head != page) {
+				unsigned int diff = start - 1 - head->index;
+				partial_start += diff << PAGE_SHIFT;
+				top += diff << PAGE_SHIFT;
+				page = head;
+			}
 			wait_on_page_writeback(page);
 			zero_user_segment(page, partial_start, top);
 			cleancache_invalidate_page(mapping, page);
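The rebasing is a one-line calculation.  A userspace illustration with
assumed values (4KiB base pages, a 2MiB THP whose head sits at index 512,
truncation point inside tail page 514):

	#include <stdio.h>

	#define PAGE_SHIFT 12

	int main(void)
	{
		unsigned long head_index = 512;	/* head of the THP */
		unsigned long index = 514;	/* tail page the caller found */
		unsigned int partial_start = 100; /* offset within the tail */
		unsigned int diff = index - head_index;

		/* Same byte, now expressed relative to the head page: */
		printf("%u\n", partial_start + (diff << PAGE_SHIFT)); /* 8292 */
		return 0;
	}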
From patchwork Mon Aug 24 15:16:53 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11733513
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Darrick J. Wong",
    linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 04/11] block: Add bio_for_each_thp_segment_all
Date: Mon, 24 Aug 2020 16:16:53 +0100
Message-Id: <20200824151700.16097-5-willy@infradead.org>
In-Reply-To: <20200824151700.16097-1-willy@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>

Iterate once for each THP instead of once for each base page.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/bio.h  | 13 +++++++++++++
 include/linux/bvec.h | 27 +++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/include/linux/bio.h b/include/linux/bio.h
index c6d765382926..a0e104910097 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -129,12 +129,25 @@ static inline bool bio_next_segment(const struct bio *bio,
 	return true;
 }
 
+static inline bool bio_next_thp_segment(const struct bio *bio,
+		struct bvec_iter_all *iter)
+{
+	if (iter->idx >= bio->bi_vcnt)
+		return false;
+
+	bvec_thp_advance(&bio->bi_io_vec[iter->idx], iter);
+	return true;
+}
+
 /*
  * drivers should _never_ use the all version - the bio may have been split
  * before it got to the driver and the driver won't own all of it
  */
 #define bio_for_each_segment_all(bvl, bio, iter) \
 	for (bvl = bvec_init_iter_all(&iter); bio_next_segment((bio), &iter); )
 
+#define bio_for_each_thp_segment_all(bvl, bio, iter) \
+	for (bvl = bvec_init_iter_all(&iter); \
+	     bio_next_thp_segment((bio), &iter); )
+
 static inline void bio_advance_iter(const struct bio *bio,
 				    struct bvec_iter *iter, unsigned int bytes)
diff --git a/include/linux/bvec.h b/include/linux/bvec.h
index ac0c7299d5b8..ea8a37a7515b 100644
--- a/include/linux/bvec.h
+++ b/include/linux/bvec.h
@@ -162,4 +162,31 @@ static inline void bvec_advance(const struct bio_vec *bvec,
 	}
 }
 
+static inline void bvec_thp_advance(const struct bio_vec *bvec,
+		struct bvec_iter_all *iter_all)
+{
+	struct bio_vec *bv = &iter_all->bv;
+	unsigned int page_size;
+
+	if (iter_all->done) {
+		bv->bv_page += thp_nr_pages(bv->bv_page);
+		page_size = thp_size(bv->bv_page);
+		bv->bv_offset = 0;
+	} else {
+		bv->bv_page = thp_head(bvec->bv_page +
+				(bvec->bv_offset >> PAGE_SHIFT));
+		page_size = thp_size(bv->bv_page);
+		bv->bv_offset = bvec->bv_offset -
+				(bv->bv_page - bvec->bv_page) * PAGE_SIZE;
+		BUG_ON(bv->bv_offset >= page_size);
+	}
+	bv->bv_len = min(page_size - bv->bv_offset,
+			 bvec->bv_len - iter_all->done);
+	iter_all->done += bv->bv_len;
+
+	if (iter_all->done == bvec->bv_len) {
+		iter_all->idx++;
+		iter_all->done = 0;
+	}
+}
+
 #endif /* __LINUX_BVEC_ITER_H */
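For callers, the payoff is that a completion handler walks one bvec per
THP instead of one per base page; patch 7 converts iomap_read_end_io() to
exactly this shape.  A hedged sketch of the pattern (my_end_io and
finish_thp_io are illustrative names, not real kernel symbols):

	static void my_end_io(struct bio *bio)
	{
		struct bio_vec *bvec;
		struct bvec_iter_all iter_all;

		/* bvec->bv_page is a head page here, and bvec->bv_len can be
		 * up to thp_size(page) - bv_offset bytes, so the body runs
		 * once per THP rather than once per PAGE_SIZE chunk. */
		bio_for_each_thp_segment_all(bvec, bio, iter_all)
			finish_thp_io(bvec->bv_page, bvec->bv_offset,
					bvec->bv_len);

		bio_put(bio);
	}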
From patchwork Mon Aug 24 15:16:54 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11733493
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Darrick J. Wong",
    linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 05/11] iomap: Support THPs in iomap_adjust_read_range
Date: Mon, 24 Aug 2020 16:16:54 +0100
Message-Id: <20200824151700.16097-6-willy@infradead.org>
In-Reply-To: <20200824151700.16097-1-willy@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>

Pass the struct page instead of the iomap_page so we can determine the
size of the page.  Use offset_in_thp() instead of offset_in_page() and
use thp_size() instead of PAGE_SIZE.  Convert the arguments to be size_t
instead of unsigned int, in case pages ever get larger than 2^31 bytes.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 2dba054095e8..5cc0343b6a8e 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -76,16 +76,16 @@ iomap_page_release(struct page *page)
 /*
  * Calculate the range inside the page that we actually need to read.
  */
-static void
-iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
-		loff_t *pos, loff_t length, unsigned *offp, unsigned *lenp)
+static void iomap_adjust_read_range(struct inode *inode, struct page *page,
+		loff_t *pos, loff_t length, size_t *offp, size_t *lenp)
 {
+	struct iomap_page *iop = to_iomap_page(page);
 	loff_t orig_pos = *pos;
 	loff_t isize = i_size_read(inode);
 	unsigned block_bits = inode->i_blkbits;
 	unsigned block_size = (1 << block_bits);
-	unsigned poff = offset_in_page(*pos);
-	unsigned plen = min_t(loff_t, PAGE_SIZE - poff, length);
+	size_t poff = offset_in_thp(page, *pos);
+	size_t plen = min_t(loff_t, thp_size(page) - poff, length);
 	unsigned first = poff >> block_bits;
 	unsigned last = (poff + plen - 1) >> block_bits;
 
@@ -123,7 +123,7 @@ iomap_adjust_read_range(struct inode *inode, struct iomap_page *iop,
 	 * page cache for blocks that are entirely outside of i_size.
 	 */
 	if (orig_pos <= isize && orig_pos + length > isize) {
-		unsigned end = offset_in_page(isize - 1) >> block_bits;
+		unsigned end = offset_in_thp(page, isize - 1) >> block_bits;
 
 		if (first <= end && last > end)
 			plen -= (last - end) * block_size;
@@ -234,7 +234,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	struct iomap_page *iop = iomap_page_create(inode, page);
 	bool same_page = false, is_contig = false;
 	loff_t orig_pos = pos;
-	unsigned poff, plen;
+	size_t poff, plen;
 	sector_t sector;
 
 	if (iomap->type == IOMAP_INLINE) {
@@ -244,7 +244,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	}
 
 	/* zero post-eof blocks as the page may be mapped */
-	iomap_adjust_read_range(inode, iop, &pos, length, &poff, &plen);
+	iomap_adjust_read_range(inode, page, &pos, length, &poff, &plen);
 	if (plen == 0)
 		goto done;
 
@@ -550,18 +550,19 @@ static int
 __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
 		struct page *page, struct iomap *srcmap)
 {
-	struct iomap_page *iop = iomap_page_create(inode, page);
 	loff_t block_size = i_blocksize(inode);
 	loff_t block_start = pos & ~(block_size - 1);
 	loff_t block_end = (pos + len + block_size - 1) & ~(block_size - 1);
-	unsigned from = offset_in_page(pos), to = from + len, poff, plen;
+	unsigned from = offset_in_page(pos), to = from + len;
+	size_t poff, plen;
 	int status;
 
 	if (PageUptodate(page))
 		return 0;
+	iomap_page_create(inode, page);
 
 	do {
-		iomap_adjust_read_range(inode, iop, &block_start,
+		iomap_adjust_read_range(inode, page, &block_start,
 				block_end - block_start, &poff, &plen);
 		if (plen == 0)
 			break;
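The two offset helpers differ only in the mask they apply, which a
userspace model makes concrete (the trailing underscores mark these as
stand-ins; thp_size is passed in rather than derived from a struct page,
and page-cache pages are assumed naturally aligned):

	#include <stdio.h>

	#define PAGE_SIZE 4096ul

	static unsigned long offset_in_page_(unsigned long pos)
	{
		return pos & (PAGE_SIZE - 1);
	}

	static unsigned long offset_in_thp_(unsigned long thp_size,
			unsigned long pos)
	{
		return pos & (thp_size - 1);
	}

	int main(void)
	{
		unsigned long pos = 5 * PAGE_SIZE + 123;

		printf("%lu\n", offset_in_page_(pos));	/* 123 */
		printf("%lu\n", offset_in_thp_(512 * PAGE_SIZE, pos)); /* 20603 */
		return 0;
	}

With a 2MiB THP the read range can therefore start 20603 bytes into the
page instead of being clipped at a 4KiB boundary.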
From patchwork Mon Aug 24 15:16:55 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11733487
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Darrick J. Wong",
    linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 06/11] iomap: Support THPs in invalidatepage
Date: Mon, 24 Aug 2020 16:16:55 +0100
Message-Id: <20200824151700.16097-7-willy@infradead.org>
In-Reply-To: <20200824151700.16097-1-willy@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>

If we're punching a hole in a THP, we need to remove the per-page iomap
data as the THP is about to be split and each page will need its own.
This means that writepage can now come across a page with no iop
allocated, so remove the assertions that there is already one, and just
create one (with the Uptodate bits set) if there isn't one.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 5cc0343b6a8e..9ea162617398 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -54,6 +54,8 @@ iomap_page_create(struct inode *inode, struct page *page)
 	iop = kzalloc(struct_size(iop, uptodate, BITS_TO_LONGS(nr_blocks)),
 			GFP_NOFS | __GFP_NOFAIL);
 	spin_lock_init(&iop->uptodate_lock);
+	if (PageUptodate(page))
+		bitmap_fill(iop->uptodate, nr_blocks);
 	attach_page_private(page, iop);
 	return iop;
 }
@@ -483,10 +485,17 @@ iomap_invalidatepage(struct page *page, unsigned int offset, unsigned int len)
 	 * If we are invalidating the entire page, clear the dirty state from it
 	 * and release it to avoid unnecessary buildup of the LRU.
 	 */
-	if (offset == 0 && len == PAGE_SIZE) {
+	if (offset == 0 && len == thp_size(page)) {
 		WARN_ON_ONCE(PageWriteback(page));
 		cancel_dirty_page(page);
 		iomap_page_release(page);
+		return;
+	}
+
+	/* Punching a hole in a THP requires releasing the iop */
+	if (PageTransHuge(page)) {
+		VM_BUG_ON_PAGE(!PageUptodate(page), page);
+		iomap_page_release(page);
 	}
 }
 EXPORT_SYMBOL_GPL(iomap_invalidatepage);
@@ -1043,14 +1052,13 @@ static void
 iomap_finish_page_writeback(struct inode *inode, struct page *page,
 		int error, unsigned int len)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct iomap_page *iop = iomap_page_create(inode, page);
 
 	if (error) {
 		SetPageError(page);
 		mapping_set_error(inode->i_mapping, -EIO);
 	}
 
-	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_count) <= 0);
 
 	if (!iop || atomic_sub_and_test(len, &iop->write_count))
@@ -1340,14 +1348,13 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 		struct writeback_control *wbc, struct inode *inode,
 		struct page *page, u64 end_offset)
 {
-	struct iomap_page *iop = to_iomap_page(page);
+	struct iomap_page *iop = iomap_page_create(inode, page);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
 	u64 file_offset; /* file offset of page */
 	int error = 0, count = 0, i;
 	LIST_HEAD(submit_list);
 
-	WARN_ON_ONCE(i_blocks_per_page(inode, page) > 1 && !iop);
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0);
 
 	/*
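Restated as a sketch, the invalidate policy above has three cases
(illustrative only; release_iop() stands in for iomap_page_release()):

	static void invalidate_policy(struct page *page, unsigned offset,
			unsigned len)
	{
		if (offset == 0 && len == thp_size(page)) {
			/* Whole page (or whole THP) is going away:
			 * drop the dirty state and the iop. */
			release_iop(page);
		} else if (PageTransHuge(page)) {
			/* Partial invalidation of a THP: the page is about
			 * to be split, and each base page will allocate its
			 * own iop on demand in the writeback paths. */
			release_iop(page);
		}
		/* Partial invalidation of a small page: keep the iop. */
	}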
"Matthew Wilcox (Oracle)" To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , "Darrick J . Wong" , linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 07/11] iomap: Support THPs in read paths Date: Mon, 24 Aug 2020 16:16:56 +0100 Message-Id: <20200824151700.16097-8-willy@infradead.org> X-Mailer: git-send-email 2.21.3 In-Reply-To: <20200824151700.16097-1-willy@infradead.org> References: <20200824151700.16097-1-willy@infradead.org> MIME-Version: 1.0 Sender: linux-block-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Use thp_size() instead of PAGE_SIZE, offset_in_thp() instead of offset_in_page() and bio_for_each_thp_segment_all(). Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 21 ++++++++++++++++----- 1 file changed, 16 insertions(+), 5 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 9ea162617398..d14de8886d5c 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -187,7 +187,7 @@ iomap_read_end_io(struct bio *bio) struct bio_vec *bvec; struct bvec_iter_all iter_all; - bio_for_each_segment_all(bvec, bio, iter_all) + bio_for_each_thp_segment_all(bvec, bio, iter_all) iomap_read_page_end_io(bvec, error); bio_put(bio); } @@ -227,6 +227,16 @@ static inline bool iomap_block_needs_zeroing(struct inode *inode, pos >= i_size_read(inode); } +/* + * Estimate the number of vectors we need based on the current page size; + * if we're wrong we'll end up doing an overly large allocation or needing + * to do a second allocation, neither of which is a big deal. + */ +static unsigned int iomap_nr_vecs(struct page *page, loff_t length) +{ + return (length + thp_size(page) - 1) >> page_shift(page); +} + static loff_t iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, struct iomap *iomap, struct iomap *srcmap) @@ -280,7 +290,7 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data, if (!ctx->bio || !is_contig || bio_full(ctx->bio, plen)) { gfp_t gfp = mapping_gfp_constraint(page->mapping, GFP_KERNEL); gfp_t orig_gfp = gfp; - int nr_vecs = (length + PAGE_SIZE - 1) >> PAGE_SHIFT; + int nr_vecs = iomap_nr_vecs(page, length); if (ctx->bio) submit_bio(ctx->bio); @@ -324,9 +334,9 @@ iomap_readpage(struct page *page, const struct iomap_ops *ops) trace_iomap_readpage(page->mapping->host, 1); - for (poff = 0; poff < PAGE_SIZE; poff += ret) { + for (poff = 0; poff < thp_size(page); poff += ret) { ret = iomap_apply(inode, page_offset(page) + poff, - PAGE_SIZE - poff, 0, ops, &ctx, + thp_size(page) - poff, 0, ops, &ctx, iomap_readpage_actor); if (ret <= 0) { WARN_ON_ONCE(ret == 0); @@ -360,7 +370,8 @@ iomap_readahead_actor(struct inode *inode, loff_t pos, loff_t length, loff_t done, ret; for (done = 0; done < length; done += ret) { - if (ctx->cur_page && offset_in_page(pos + done) == 0) { + if (ctx->cur_page && + offset_in_thp(ctx->cur_page, pos + done) == 0) { if (!ctx->cur_page_in_bio) unlock_page(ctx->cur_page); put_page(ctx->cur_page); From patchwork Mon Aug 24 15:16:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11733433 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 3E724138A for ; Mon, 24 Aug 2020 15:18:53 +0000 (UTC) Received: from vger.kernel.org 
From patchwork Mon Aug 24 15:16:57 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11733433
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Darrick J. Wong",
    linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 08/11] iomap: Change iomap_write_begin calling convention
Date: Mon, 24 Aug 2020 16:16:57 +0100
Message-Id: <20200824151700.16097-9-willy@infradead.org>
In-Reply-To: <20200824151700.16097-1-willy@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>

Pass (up to) the remaining length of the extent to iomap_write_begin()
and have it return the number of bytes that will fit in the page.  That
lets us copy more bytes per call to iomap_write_begin() if the page cache
has already allocated a THP (and will in future allow us to pass a hint
to the page cache that it should try to allocate a larger page if there
are none in the cache).

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 fs/iomap/buffered-io.c | 61 +++++++++++++++++++++++-------------------
 1 file changed, 33 insertions(+), 28 deletions(-)
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index d14de8886d5c..f43a15aaa381 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -566,14 +566,14 @@ iomap_read_page_sync(loff_t block_start, struct page *page, unsigned poff,
 	return submit_bio_wait(&bio);
 }
 
-static int
-__iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
-		struct page *page, struct iomap *srcmap)
+static ssize_t __iomap_write_begin(struct inode *inode, loff_t pos,
+		size_t len, int flags, struct page *page, struct iomap *srcmap)
 {
 	loff_t block_size = i_blocksize(inode);
 	loff_t block_start = pos & ~(block_size - 1);
 	loff_t block_end = (pos + len + block_size - 1) & ~(block_size - 1);
-	unsigned from = offset_in_page(pos), to = from + len;
+	size_t from = offset_in_thp(page, pos);
+	size_t to = from + len;
 	size_t poff, plen;
 	int status;
 
@@ -609,12 +609,13 @@ __iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, int flags,
 	return 0;
 }
 
-static int
-iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
-		struct page **pagep, struct iomap *iomap, struct iomap *srcmap)
+static ssize_t iomap_write_begin(struct inode *inode, loff_t pos, loff_t len,
+		unsigned flags, struct page **pagep, struct iomap *iomap,
+		struct iomap *srcmap)
 {
 	const struct iomap_page_ops *page_ops = iomap->page_ops;
 	struct page *page;
+	size_t offset;
 	int status = 0;
 
 	BUG_ON(pos + len > iomap->offset + iomap->length);
@@ -625,6 +626,8 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 		return -EINTR;
 
 	if (page_ops && page_ops->page_prepare) {
+		if (len > UINT_MAX)
+			len = UINT_MAX;
 		status = page_ops->page_prepare(inode, pos, len, iomap);
 		if (status)
 			return status;
@@ -636,6 +639,10 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 		status = -ENOMEM;
 		goto out_no_page;
 	}
+	page = thp_head(page);
+	offset = offset_in_thp(page, pos);
+	if (len > thp_size(page) - offset)
+		len = thp_size(page) - offset;
 
 	if (srcmap->type == IOMAP_INLINE)
 		iomap_read_inline_data(inode, page, srcmap);
@@ -645,11 +652,11 @@ iomap_write_begin(struct inode *inode, loff_t pos, unsigned len, unsigned flags,
 		status = __iomap_write_begin(inode, pos, len, flags, page,
 				srcmap);
 
-	if (unlikely(status))
+	if (status < 0)
 		goto out_unlock;
 
 	*pagep = page;
-	return 0;
+	return len;
 
 out_unlock:
 	unlock_page(page);
@@ -805,8 +812,10 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 
 		status = iomap_write_begin(inode, pos, bytes, 0, &page, iomap,
 				srcmap);
-		if (unlikely(status))
+		if (status < 0)
 			break;
+		/* We may be partway through a THP */
+		offset = offset_in_thp(page, pos);
 
 		if (mapping_writably_mapped(inode->i_mapping))
 			flush_dcache_page(page);
@@ -866,7 +875,6 @@ static loff_t
 iomap_unshare_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		struct iomap *iomap, struct iomap *srcmap)
 {
-	long status = 0;
 	loff_t written = 0;
 
 	/* don't bother with blocks that are not shared to start with */
@@ -877,25 +885,24 @@ iomap_unshare_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		return length;
 
 	do {
-		unsigned long offset = offset_in_page(pos);
-		unsigned long bytes = min_t(loff_t, PAGE_SIZE - offset, length);
 		struct page *page;
+		ssize_t bytes;
 
-		status = iomap_write_begin(inode, pos, bytes,
+		bytes = iomap_write_begin(inode, pos, length,
 				IOMAP_WRITE_F_UNSHARE, &page, iomap, srcmap);
-		if (unlikely(status))
-			return status;
+		if (bytes < 0)
+			return bytes;
 
-		status = iomap_write_end(inode, pos, bytes, bytes, page, iomap,
+		bytes = iomap_write_end(inode, pos, bytes, bytes, page, iomap,
 				srcmap);
-		if (WARN_ON_ONCE(status == 0))
+		if (WARN_ON_ONCE(bytes == 0))
 			return -EIO;
 
 		cond_resched();
 
-		pos += status;
-		written += status;
-		length -= status;
+		pos += bytes;
+		written += bytes;
+		length -= bytes;
 
 		balance_dirty_pages_ratelimited(inode->i_mapping);
 	} while (length);
@@ -926,15 +933,13 @@ static loff_t
 iomap_zero(struct inode *inode, loff_t pos, u64 length, struct iomap *iomap,
 		struct iomap *srcmap)
 {
 	struct page *page;
-	int status;
-	unsigned offset = offset_in_page(pos);
-	unsigned bytes = min_t(u64, PAGE_SIZE - offset, length);
+	ssize_t bytes;
 
-	status = iomap_write_begin(inode, pos, bytes, 0, &page, iomap, srcmap);
-	if (status)
-		return status;
+	bytes = iomap_write_begin(inode, pos, length, 0, &page, iomap, srcmap);
+	if (bytes < 0)
+		return bytes;
 
-	zero_user(page, offset, bytes);
+	zero_user(page, offset_in_thp(page, pos), bytes);
 	mark_page_accessed(page);
 
 	return iomap_write_end(inode, pos, bytes, bytes, page, iomap, srcmap);
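The resulting caller pattern, sketched below (copy_loop is an
illustrative name, and the error handling is simplified relative to the
real actors above): ask for the whole remaining length and advance by
whatever iomap_write_begin() actually granted for the page it found.

	static loff_t copy_loop(struct inode *inode, loff_t pos, loff_t length,
			struct iomap *iomap, struct iomap *srcmap)
	{
		loff_t written = 0;

		do {
			struct page *page;
			ssize_t bytes;

			/* Returns up to thp_size(page) - offset_in_thp(page,
			 * pos) bytes, so a THP is consumed in one pass. */
			bytes = iomap_write_begin(inode, pos, length, 0,
					&page, iomap, srcmap);
			if (bytes < 0)
				return written ? written : bytes;

			/* ... modify the page here ... */
			bytes = iomap_write_end(inode, pos, bytes, bytes,
					page, iomap, srcmap);

			pos += bytes;
			written += bytes;
			length -= bytes;
		} while (length > 0);

		return written;
	}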
From patchwork Mon Aug 24 15:16:58 2020
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11733439
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Darrick J. Wong",
    linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH 09/11] iomap: Support THPs in write paths
Date: Mon, 24 Aug 2020 16:16:58 +0100
Message-Id: <20200824151700.16097-10-willy@infradead.org>
In-Reply-To: <20200824151700.16097-1-willy@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>
Wong" , linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 09/11] iomap: Support THPs in write paths Date: Mon, 24 Aug 2020 16:16:58 +0100 Message-Id: <20200824151700.16097-10-willy@infradead.org> X-Mailer: git-send-email 2.21.3 In-Reply-To: <20200824151700.16097-1-willy@infradead.org> References: <20200824151700.16097-1-willy@infradead.org> MIME-Version: 1.0 Sender: linux-block-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Use thp_size() instead of PAGE_SIZE and offset_in_thp() instead of offset_in_page(). Also simplify the logic in iomap_do_writepage() for determining end of file. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 54 ++++++++++++++++++++++++------------------ 1 file changed, 31 insertions(+), 23 deletions(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index f43a15aaa381..52d371c59758 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -452,7 +452,7 @@ iomap_is_partially_uptodate(struct page *page, unsigned long from, unsigned i; /* Limit range to one page */ - len = min_t(unsigned, PAGE_SIZE - from, count); + len = min_t(unsigned, thp_size(page) - from, count); /* First and last blocks in range within page */ first = from >> inode->i_blkbits; @@ -649,8 +649,8 @@ static ssize_t iomap_write_begin(struct inode *inode, loff_t pos, loff_t len, else if (iomap->flags & IOMAP_F_BUFFER_HEAD) status = __block_write_begin_int(page, pos, len, NULL, srcmap); else - status = __iomap_write_begin(inode, pos, len, flags, page, - srcmap); + status = __iomap_write_begin(inode, pos, len, flags, + thp_head(page), srcmap); if (status < 0) goto out_unlock; @@ -675,6 +675,7 @@ iomap_set_page_dirty(struct page *page) struct address_space *mapping = page_mapping(page); int newly_dirty; + VM_BUG_ON_PGFLAGS(PageTail(page), page); if (unlikely(!mapping)) return !TestSetPageDirty(page); @@ -697,7 +698,9 @@ EXPORT_SYMBOL_GPL(iomap_set_page_dirty); static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, size_t copied, struct page *page) { - flush_dcache_page(page); + size_t offset = offset_in_thp(page, pos); + + flush_dcache_page(page + offset / PAGE_SIZE); /* * The blocks that were entirely written will now be uptodate, so we @@ -712,7 +715,7 @@ static size_t __iomap_write_end(struct inode *inode, loff_t pos, size_t len, */ if (unlikely(copied < len && !PageUptodate(page))) return 0; - iomap_set_range_uptodate(page, offset_in_page(pos), len); + iomap_set_range_uptodate(page, offset, len); iomap_set_page_dirty(page); return copied; } @@ -749,7 +752,8 @@ static size_t iomap_write_end(struct inode *inode, loff_t pos, size_t len, ret = block_write_end(NULL, inode->i_mapping, pos, len, copied, page, NULL); } else { - ret = __iomap_write_end(inode, pos, len, copied, page); + ret = __iomap_write_end(inode, pos, len, copied, + thp_head(page)); } /* @@ -788,6 +792,10 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data, unsigned long bytes; /* Bytes to write to page */ size_t copied; /* Bytes copied from user */ + /* + * XXX: We don't know what size page we'll find in the + * page cache, so only copy up to a regular page boundary. 
@@ -818,7 +826,7 @@ iomap_write_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		offset = offset_in_thp(page, pos);
 
 		if (mapping_writably_mapped(inode->i_mapping))
-			flush_dcache_page(page);
+			flush_dcache_page(page + offset / PAGE_SIZE);
 
 		copied = iov_iter_copy_from_user_atomic(page, i, offset, bytes);
 
@@ -1110,7 +1118,7 @@ iomap_finish_ioend(struct iomap_ioend *ioend, int error)
 		next = bio->bi_private;
 
 		/* walk each page on bio, ending page IO on them */
-		bio_for_each_segment_all(bv, bio, iter_all)
+		bio_for_each_thp_segment_all(bv, bio, iter_all)
 			iomap_finish_page_writeback(inode, bv->bv_page, error,
 					bv->bv_len);
 		bio_put(bio);
@@ -1317,7 +1325,7 @@ iomap_add_to_ioend(struct inode *inode, loff_t offset, struct page *page,
 {
 	sector_t sector = iomap_sector(&wpc->iomap, offset);
 	unsigned len = i_blocksize(inode);
-	unsigned poff = offset & (PAGE_SIZE - 1);
+	unsigned poff = offset_in_thp(page, offset);
 	bool merged, same_page = false;
 
 	if (!wpc->ioend || !iomap_can_add_to_ioend(wpc, offset, sector)) {
@@ -1367,8 +1375,9 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	struct iomap_page *iop = iomap_page_create(inode, page);
 	struct iomap_ioend *ioend, *next;
 	unsigned len = i_blocksize(inode);
-	u64 file_offset; /* file offset of page */
+	loff_t pos;
 	int error = 0, count = 0, i;
+	int nr_blocks = i_blocks_per_page(inode, page);
 	LIST_HEAD(submit_list);
 
 	WARN_ON_ONCE(iop && atomic_read(&iop->write_count) != 0);
@@ -1378,20 +1387,20 @@ iomap_writepage_map(struct iomap_writepage_ctx *wpc,
 	 * end of the current map or find the current map invalid, grab a new
 	 * one.
 	 */
-	for (i = 0, file_offset = page_offset(page);
-	     i < (PAGE_SIZE >> inode->i_blkbits) && file_offset < end_offset;
-	     i++, file_offset += len) {
+	for (i = 0, pos = page_offset(page);
+	     i < nr_blocks && pos < end_offset;
+	     i++, pos += len) {
 		if (iop && !test_bit(i, iop->uptodate))
 			continue;
 
-		error = wpc->ops->map_blocks(wpc, inode, file_offset);
+		error = wpc->ops->map_blocks(wpc, inode, pos);
 		if (error)
 			break;
 		if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
 			continue;
 		if (wpc->iomap.type == IOMAP_HOLE)
 			continue;
-		iomap_add_to_ioend(inode, file_offset, page, iop, wpc, wbc,
+		iomap_add_to_ioend(inode, pos, page, iop, wpc, wbc,
 				&submit_list);
 		count++;
 	}
@@ -1473,11 +1482,11 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 {
 	struct iomap_writepage_ctx *wpc = data;
 	struct inode *inode = page->mapping->host;
-	pgoff_t end_index;
 	u64 end_offset;
 	loff_t offset;
 
-	trace_iomap_writepage(inode, page_offset(page), PAGE_SIZE);
+	VM_BUG_ON_PGFLAGS(PageTail(page), page);
+	trace_iomap_writepage(inode, page_offset(page), thp_size(page));
 
 	/*
	 * Refuse to write the page out if we are called from reclaim context.
@@ -1514,10 +1523,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 	 * ---------------------------------^------------------|
 	 */
 	offset = i_size_read(inode);
-	end_index = offset >> PAGE_SHIFT;
-	if (page->index < end_index)
-		end_offset = (loff_t)(page->index + 1) << PAGE_SHIFT;
-	else {
+	end_offset = page_offset(page) + thp_size(page);
+	if (end_offset > offset) {
 		/*
 		 * Check whether the page to write out is beyond or straddles
 		 * i_size or not.
@@ -1529,7 +1536,8 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * |				|      Straddles	|
 		 * ---------------------------------^-----------|--------|
 		 */
-		unsigned offset_into_page = offset & (PAGE_SIZE - 1);
+		unsigned offset_into_page = offset_in_thp(page, offset);
+		pgoff_t end_index = offset >> PAGE_SHIFT;
 
 		/*
 		 * Skip the page if it is fully outside i_size, e.g. due to a
@@ -1560,7 +1568,7 @@ iomap_do_writepage(struct page *page, struct writeback_control *wbc, void *data)
 		 * memory is zeroed when mapped, and writes to that region are
 		 * not written out to the file."
 		 */
-		zero_user_segment(page, offset_into_page, PAGE_SIZE);
+		zero_user_segment(page, offset_into_page, thp_size(page));
 
 		/* Adjust the end_offset to the end of file */
 		end_offset = offset;
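The simplified EOF logic is again just arithmetic; a userspace check with
assumed sizes (a 2MiB THP mapped at file offset 2MiB, EOF inside it):

	#include <stdio.h>

	int main(void)
	{
		unsigned long long page_offset = 2 << 20; /* THP at 2MiB */
		unsigned long long thp_size = 2 << 20;
		unsigned long long i_size = (3 << 20) + 123;
		unsigned long long end_offset = page_offset + thp_size;

		if (end_offset > i_size) {
			/* Straddles EOF: writeback stops at i_size and the
			 * rest of the page is zeroed, as zero_user_segment()
			 * does in the patch. */
			end_offset = i_size;
		}
		printf("%llu\n", end_offset);	/* 3145851 */
		return 0;
	}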
Wong" , linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Christoph Hellwig Subject: [PATCH 10/11] iomap: Inline data shouldn't see THPs Date: Mon, 24 Aug 2020 16:16:59 +0100 Message-Id: <20200824151700.16097-11-willy@infradead.org> X-Mailer: git-send-email 2.21.3 In-Reply-To: <20200824151700.16097-1-willy@infradead.org> References: <20200824151700.16097-1-willy@infradead.org> MIME-Version: 1.0 Sender: linux-block-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org Assert that we're not seeing THPs in functions that read/write inline data, rather than zeroing out the tail. Signed-off-by: Matthew Wilcox (Oracle) Reviewed-by: Christoph Hellwig --- fs/iomap/buffered-io.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index 52d371c59758..ca2aa1995519 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -210,6 +210,7 @@ iomap_read_inline_data(struct inode *inode, struct page *page, return; BUG_ON(page->index); + BUG_ON(PageCompound(page)); BUG_ON(size > PAGE_SIZE - offset_in_page(iomap->inline_data)); addr = kmap_atomic(page); @@ -727,6 +728,7 @@ static size_t iomap_write_end_inline(struct inode *inode, struct page *page, flush_dcache_page(page); WARN_ON_ONCE(!PageUptodate(page)); + BUG_ON(PageCompound(page)); BUG_ON(pos + copied > PAGE_SIZE - offset_in_page(iomap->inline_data)); addr = kmap_atomic(page); From patchwork Mon Aug 24 15:17:00 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Matthew Wilcox X-Patchwork-Id: 11733479 Return-Path: Received: from mail.kernel.org (pdx-korg-mail-1.web.codeaurora.org [172.30.200.123]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 21174138A for ; Mon, 24 Aug 2020 15:24:35 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id F0E1D2078D for ; Mon, 24 Aug 2020 15:24:34 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=fail reason="signature verification failed" (2048-bit key) header.d=infradead.org header.i=@infradead.org header.b="oOk8U/QD" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726999AbgHXPXp (ORCPT ); Mon, 24 Aug 2020 11:23:45 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:41446 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1727838AbgHXPR4 (ORCPT ); Mon, 24 Aug 2020 11:17:56 -0400 Received: from casper.infradead.org (casper.infradead.org [IPv6:2001:8b0:10b:1236::1]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6F914C06179B; Mon, 24 Aug 2020 08:17:07 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version: References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To: Content-Type:Content-ID:Content-Description; bh=Huo6sRbyJX9efH4aMnmRbcXQABN82YlWzJMQK7nzbq8=; b=oOk8U/QDtTn5eoOaAbpATFETeZ RO5j/88tOn7AiH6SAv7tSpzye6OMyYDvhOKVfTs9Rv04yGxu1zLhlxg2z5pBJDB3qNzoFAvQ2YZCK Xc2BwJyRybfzVEhAhLVarz3sEuywitv0peWEW7G2anq/cOFH0oDDER1Nwhy9zpVLuSlIadNEqpDtB YTeLFVX5FCN00qjYm83jsVT7jyxAXHkOE/DwswSNis1nLInBZBlg8B2N7vNAz+TnYKGhrjq5omxpv b6ZHaMFKe8uinPq6Hdn0gW65hyGzXYlGYw3YUbAnXVupRW3Sdgyln54DIVwUv42pL0M6WOyvXnQQ8 ZiFY94nQ==; Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux)) id 1kAEDZ-0004E1-RG; Mon, 24 Aug 2020 15:17:05 +0000 
From: "Matthew Wilcox (Oracle)" To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org Cc: "Matthew Wilcox (Oracle)" , "Darrick J . Wong" , linux-block@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org Subject: [PATCH 11/11] iomap: Handle tail pages in iomap_page_mkwrite Date: Mon, 24 Aug 2020 16:17:00 +0100 Message-Id: <20200824151700.16097-12-willy@infradead.org> X-Mailer: git-send-email 2.21.3 In-Reply-To: <20200824151700.16097-1-willy@infradead.org> References: <20200824151700.16097-1-willy@infradead.org> MIME-Version: 1.0 Sender: linux-block-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-block@vger.kernel.org iomap_page_mkwrite() can be called with a tail page. If we are, operate on the head page, since we're treating the entire thing as a single unit and the whole page is dirtied at the same time. Signed-off-by: Matthew Wilcox (Oracle) --- fs/iomap/buffered-io.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c index ca2aa1995519..eb8202424665 100644 --- a/fs/iomap/buffered-io.c +++ b/fs/iomap/buffered-io.c @@ -1043,7 +1043,7 @@ iomap_page_mkwrite_actor(struct inode *inode, loff_t pos, loff_t length, vm_fault_t iomap_page_mkwrite(struct vm_fault *vmf, const struct iomap_ops *ops) { - struct page *page = vmf->page; + struct page *page = thp_head(vmf->page); struct inode *inode = file_inode(vmf->vma->vm_file); unsigned long length; loff_t offset;