From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, "Darrick J. Wong",
	linux-block@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 02/11] mm: Support THPs in zero_user_segments
Date: Mon, 24 Aug 2020 16:16:51 +0100
Message-Id: <20200824151700.16097-3-willy@infradead.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200824151700.16097-1-willy@infradead.org>
References: <20200824151700.16097-1-willy@infradead.org>

We can only kmap() one subpage of a THP at a time, so loop over all
relevant subpages, skipping ones which don't need to be zeroed.  This is
too large to inline when THPs are enabled and we actually need highmem,
so put it in highmem.c.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/highmem.h | 15 +++++++---
 mm/highmem.c            | 62 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 71 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 14e6202ce47f..5390bfd4bdd3 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -284,13 +284,18 @@ static inline void clear_highpage(struct page *page)
 	kunmap_atomic(kaddr);
 }
 
+#if defined(CONFIG_HIGHMEM) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
+void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
+		unsigned start2, unsigned end2);
+#else /* !HIGHMEM || !TRANSPARENT_HUGEPAGE */
 static inline void zero_user_segments(struct page *page,
-	unsigned start1, unsigned end1,
-	unsigned start2, unsigned end2)
+		unsigned start1, unsigned end1,
+		unsigned start2, unsigned end2)
 {
 	void *kaddr = kmap_atomic(page);
+	unsigned int i;
 
-	BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);
+	BUG_ON(end1 > thp_size(page) || end2 > thp_size(page));
 
 	if (end1 > start1)
 		memset(kaddr + start1, 0, end1 - start1);
@@ -299,8 +304,10 @@ static inline void zero_user_segments(struct page *page,
 		memset(kaddr + start2, 0, end2 - start2);
 
 	kunmap_atomic(kaddr);
-	flush_dcache_page(page);
+	for (i = 0; i < thp_nr_pages(page); i++)
+		flush_dcache_page(page + i);
 }
+#endif /* !HIGHMEM || !TRANSPARENT_HUGEPAGE */
 
 static inline void zero_user_segment(struct page *page,
 	unsigned start, unsigned end)
diff --git a/mm/highmem.c b/mm/highmem.c
index 64d8dea47dd1..c0f1e389a153 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -367,9 +367,67 @@ void kunmap_high(struct page *page)
 	if (need_wakeup)
 		wake_up(pkmap_map_wait);
 }
-
 EXPORT_SYMBOL(kunmap_high);
-#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
+		unsigned start2, unsigned end2)
+{
+	unsigned int i;
+
+	BUG_ON(end1 > thp_size(page) || end2 > thp_size(page));
+
+	for (i = 0; i < thp_nr_pages(page); i++) {
+		void *kaddr;
+		unsigned this_end;
+
+		if (end1 == 0 && start2 >= PAGE_SIZE) {
+			start2 -= PAGE_SIZE;
+			end2 -= PAGE_SIZE;
+			continue;
+		}
+
+		if (start1 >= PAGE_SIZE) {
+			start1 -= PAGE_SIZE;
+			end1 -= PAGE_SIZE;
+			if (start2) {
+				start2 -= PAGE_SIZE;
+				end2 -= PAGE_SIZE;
+			}
+			continue;
+		}
+
+		kaddr = kmap_atomic(page + i);
+
+		this_end = min_t(unsigned, end1, PAGE_SIZE);
+		if (end1 > start1)
+			memset(kaddr + start1, 0, this_end - start1);
+		end1 -= this_end;
+		start1 = 0;
+
+		if (start2 >= PAGE_SIZE) {
+			start2 -= PAGE_SIZE;
+			end2 -= PAGE_SIZE;
+		} else {
+			this_end = min_t(unsigned, end2, PAGE_SIZE);
+			if (end2 > start2)
+				memset(kaddr + start2, 0, this_end - start2);
+			end2 -= this_end;
+			start2 = 0;
+		}
+
+		kunmap_atomic(kaddr);
+		flush_dcache_page(page + i);
+
+		if (!end1 && !end2)
+			break;
+	}
+
+	BUG_ON((start1 | start2 | end1 | end2) != 0);
+}
+EXPORT_SYMBOL(zero_user_segments);
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#endif /* CONFIG_HIGHMEM */
 
 #if defined(HASHED_PAGE_VIRTUAL)
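
For context, a minimal sketch of how a caller might use the two-segment
interface on a THP.  The helper below is hypothetical and not part of
this patch; note that both offsets are relative to the start of the THP,
not to any single subpage:

	/*
	 * Hypothetical caller, for illustration only: zero everything in
	 * a THP outside the byte range [from, to).  Either segment may
	 * cross subpage boundaries; zero_user_segments() maps and zeroes
	 * one PAGE_SIZE subpage at a time via kmap_atomic().
	 */
	static void thp_zero_outside(struct page *page, unsigned from,
			unsigned to)
	{
		zero_user_segments(page, 0, from, to, thp_size(page));
	}

Inside the out-of-line loop, the start/end offsets are adjusted downwards
as the walk advances past each subpage, so the offsets handed to memset()
are always relative to the currently mapped subpage, and subpages that
fall entirely outside both segments are never mapped at all.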