From patchwork Mon Aug 31 07:14:37 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 11745593
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
CC: Alexander Viro, Christoph Hellwig, Ilya Dryomov, Jens Axboe, LKML,
 John Hubbard
Subject: [PATCH v3 1/3] mm/gup: introduce pin_page()
Date: Mon, 31 Aug 2020 00:14:37 -0700
Message-ID: <20200831071439.1014766-2-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20200831071439.1014766-1-jhubbard@nvidia.com>
References: <20200831071439.1014766-1-jhubbard@nvidia.com>

pin_page() is the FOLL_PIN equivalent of get_page(). This was always a
missing piece of the pin/unpin API calls (early reviewers of
pin_user_pages() asked about it, in fact), but until now, it just wasn't
needed.

Finally though, now that the Direct IO pieces in block/bio are about to be
converted to use FOLL_PIN, it turns out that there are some cases in which
get_page() and get_user_pages_fast() were both used. Converting those
sites requires a drop-in replacement for get_page(), which this patch
supplies.

[1] and [2] provide some background about the overall effort to convert
things to pin_user_page*() and unpin_user_page*().

[1] "Explicit pinning of user-space pages": https://lwn.net/Articles/807108/

[2] Documentation/core-api/pin_user_pages.rst

Cc: Christoph Hellwig
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/mm.h |  2 ++
 mm/gup.c           | 33 +++++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ca6e6a81576b..24240cf66c44 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1154,6 +1154,8 @@ static inline void get_page(struct page *page)
 	page_ref_inc(page);
 }
 
+void pin_page(struct page *page);
+
 bool __must_check try_grab_page(struct page *page, unsigned int flags);
 
 static inline __must_check bool try_get_page(struct page *page)
diff --git a/mm/gup.c b/mm/gup.c
index ae096ea7583f..a3a4bfae224a 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -123,6 +123,39 @@ static __maybe_unused struct page *try_grab_compound_head(struct page *page,
 	return NULL;
 }
 
+/*
+ * pin_page() - elevate the page refcount, and mark as FOLL_PIN
+ *
+ * This is the FOLL_PIN equivalent of get_page(). It is intended for use
+ * when the page will be released via unpin_user_page().
+ *
+ * Unlike pin_user_page*(), pin_page() may be used on nearly any page, not
+ * just userspace-allocated pages.
+ */
+void pin_page(struct page *page)
+{
+	int refs = 1;
+
+	page = compound_head(page);
+
+	VM_BUG_ON_PAGE(page_ref_count(page) <= 0, page);
+
+	if (hpage_pincount_available(page))
+		hpage_pincount_add(page, 1);
+	else
+		refs = GUP_PIN_COUNTING_BIAS;
+
+	/*
+	 * Similar to try_grab_compound_head(): even if using the
+	 * hpage_pincount_add/_sub() routines, be sure to
+	 * *also* increment the normal page refcount field at least
+	 * once, so that the page really is pinned.
+	 */
+	page_ref_add(page, refs);
+
+	mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED, 1);
+}
+
 /**
  * try_grab_page() - elevate a page's refcount by a flag-dependent amount
  *
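
(Illustrative note, not part of the patch: the conversion pattern this
enables. In a bio/Direct IO path, pages arrive via pin_user_pages_fast()
and already hold FOLL_PIN references; any *extra* reference taken with
get_page() would then need put_page(), splitting the release paths. A
minimal sketch of a converted call site follows; the helper name is
hypothetical, while pin_page(), pin_user_pages_fast(), and
unpin_user_page() are the real APIs involved.)

	#include <linux/mm.h>

	/*
	 * Hypothetical helper: keep one extra reference to a page that was
	 * obtained via pin_user_pages_fast(). Before pin_page() existed,
	 * this had to be get_page()/put_page(), leaving the page with
	 * mixed FOLL_GET and FOLL_PIN references. With pin_page(), every
	 * reference is FOLL_PIN, and every release is unpin_user_page().
	 */
	static void example_hold_page(struct page *page)
	{
		pin_page(page);			/* was: get_page(page) */

		/* ... page is used, e.g. as a DMA target ... */

		unpin_user_page(page);		/* was: put_page(page) */
	}

(Since pin_page() applies GUP_PIN_COUNTING_BIAS to non-huge pages, such
pins remain detectable via page_maybe_dma_pinned(), which is the point of
routing all extra references through pin_page() rather than get_page().)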