From patchwork Wed Jun 10 20:13:12 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Matthew Wilcox
X-Patchwork-Id: 11598681
From: Matthew Wilcox
To: linux-fsdevel@vger.kernel.org
Cc: "Matthew Wilcox (Oracle)" , linux-mm@kvack.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v6 18/51] mm: Support THPs in zero_user_segments
Date: Wed, 10 Jun 2020 13:13:12 -0700
Message-Id: <20200610201345.13273-19-willy@infradead.org>
X-Mailer: git-send-email 2.21.1
In-Reply-To: <20200610201345.13273-1-willy@infradead.org>
References: <20200610201345.13273-1-willy@infradead.org>

From: "Matthew Wilcox (Oracle)"

We can only kmap() one subpage of a THP at a time, so loop over all
relevant subpages, skipping ones which don't need to be zeroed.  This is
too large to inline when THPs are enabled and we actually need highmem,
so put it in highmem.c.

Signed-off-by: Matthew Wilcox (Oracle)
---
 include/linux/highmem.h | 11 ++++++--
 mm/highmem.c            | 62 +++++++++++++++++++++++++++++++++++++++--
 2 files changed, 68 insertions(+), 5 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index d6e82e3de027..f05589513103 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -284,13 +284,17 @@ static inline void clear_highpage(struct page *page)
 	kunmap_atomic(kaddr);
 }
 
+#if defined(CONFIG_HIGHMEM) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
+void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
+		unsigned start2, unsigned end2);
+#else /* !HIGHMEM || !TRANSPARENT_HUGEPAGE */
 static inline void zero_user_segments(struct page *page,
-	unsigned start1, unsigned end1,
-	unsigned start2, unsigned end2)
+		unsigned start1, unsigned end1,
+		unsigned start2, unsigned end2)
 {
 	void *kaddr = kmap_atomic(page);
 
-	BUG_ON(end1 > PAGE_SIZE || end2 > PAGE_SIZE);
+	BUG_ON(end1 > thp_size(page) || end2 > thp_size(page));
 
 	if (end1 > start1)
 		memset(kaddr + start1, 0, end1 - start1);
@@ -301,6 +305,7 @@ static inline void zero_user_segments(struct page *page,
 	kunmap_atomic(kaddr);
 	flush_dcache_page(page);
 }
+#endif /* !HIGHMEM || !TRANSPARENT_HUGEPAGE */
 
 static inline void zero_user_segment(struct page *page,
 	unsigned start, unsigned end)
diff --git a/mm/highmem.c b/mm/highmem.c
index 64d8dea47dd1..686cae2f1ba5 100644
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -367,9 +367,67 @@ void kunmap_high(struct page *page)
 	if (need_wakeup)
 		wake_up(pkmap_map_wait);
 }
-
 EXPORT_SYMBOL(kunmap_high);
-#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+void zero_user_segments(struct page *page, unsigned start1, unsigned end1,
+		unsigned start2, unsigned end2)
+{
+	unsigned int i;
+
+	BUG_ON(end1 > thp_size(page) || end2 > thp_size(page));
+
+	for (i = 0; i < thp_nr_pages(page); i++) {
+		void *kaddr;
+		unsigned this_end;
+
+		if (end1 == 0 && start2 >= PAGE_SIZE) {
+			start2 -= PAGE_SIZE;
+			end2 -= PAGE_SIZE;
+			continue;
+		}
+
+		if (start1 >= PAGE_SIZE) {
+			start1 -= PAGE_SIZE;
+			end1 -= PAGE_SIZE;
+			if (start2) {
+				start2 -= PAGE_SIZE;
+				end2 -= PAGE_SIZE;
+			}
+			continue;
+		}
+
+		kaddr = kmap_atomic(page + i);
+
+		this_end = min_t(unsigned, end1, PAGE_SIZE);
+		if (end1 > start1)
+			memset(kaddr + start1, 0, this_end - start1);
+		end1 -= this_end;
+		start1 = 0;
+
+		if (start2 >= PAGE_SIZE) {
+			start2 -= PAGE_SIZE;
+			end2 -= PAGE_SIZE;
+		} else {
+			this_end = min_t(unsigned, end2, PAGE_SIZE);
+			if (end2 > start2)
+				memset(kaddr + start2, 0, this_end - start2);
+			end2 -= this_end;
+			start2 = 0;
+		}
+
+		kunmap_atomic(kaddr);
+
+		if (!end1 && !end2)
+			break;
+	}
+	flush_dcache_page(page);
+
+	BUG_ON((start1 | start2 | end1 | end2) != 0);
+}
+EXPORT_SYMBOL(zero_user_segments);
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#endif /* CONFIG_HIGHMEM */
 
 #if defined(HASHED_PAGE_VIRTUAL)