From patchwork Wed Sep 16 07:35:36 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11779207
From: Mike Rapoport
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
    Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
    David Hildenbrand, Elena Reshetova, "H. Peter Anvin", Idan Yaniv,
    Ingo Molnar, James Bottomley, "Kirill A. Shutemov", Matthew Wilcox,
    Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
    Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Thomas Gleixner,
    Tycho Andersen, Will Deacon, linux-api@vger.kernel.org,
    linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
    linux-riscv@lists.infradead.org, x86@kernel.org
Subject: [PATCH v5 2/5] mmap: make mlock_future_check() global
Date: Wed, 16 Sep 2020 10:35:36 +0300
Message-Id: <20200916073539.3552-3-rppt@kernel.org>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200916073539.3552-1-rppt@kernel.org>
References: <20200916073539.3552-1-rppt@kernel.org>

From: Mike Rapoport

It will be used by the upcoming secret memory implementation.

Signed-off-by: Mike Rapoport
---
 mm/internal.h | 3 +++
 mm/mmap.c     | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 10c677655912..40544fbf49c9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -350,6 +350,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
 extern void mlock_vma_page(struct page *page);
 extern unsigned int munlock_vma_page(struct page *page);
 
+extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+			      unsigned long len);
+
 /*
  * Clear the page's PageMlocked(). This can be useful in a situation where
  * we want to unconditionally remove a page from the pagecache -- e.g.,
diff --git a/mm/mmap.c b/mm/mmap.c
index 40248d84ad5f..190761920142 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1310,9 +1310,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
 	return hint;
 }
 
-static inline int mlock_future_check(struct mm_struct *mm,
-				     unsigned long flags,
-				     unsigned long len)
+int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+		       unsigned long len)
 {
 	unsigned long locked, lock_limit;
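
For context, a minimal sketch of how another mm/ user could call the now-global
mlock_future_check() once it is declared in mm/internal.h. This caller is not
part of this patch; the function and file names below (example_mmap) are
illustrative only and merely assume the signature exported above.

```c
/*
 * Illustrative sketch, not part of this patch: a hypothetical ->mmap()
 * handler that validates RLIMIT_MEMLOCK via mlock_future_check() before
 * setting up a mapping that will be treated as locked memory.
 */
#include <linux/fs.h>
#include <linux/mm.h>

#include "internal.h"	/* declares mlock_future_check() after this patch */

static int example_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long len = vma->vm_end - vma->vm_start;

	/* Refuse the mapping if it would exceed the caller's mlock limit. */
	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
		return -EAGAIN;

	/* Account the area as locked and keep it out of core dumps. */
	vma->vm_flags |= VM_LOCKED | VM_DONTDUMP;

	return 0;
}
```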