From patchwork Sun Feb 27 09:34:29 2022
From: John Hubbard <jhubbard@nvidia.com>
To: Jens Axboe, Jan Kara, Christoph Hellwig, Dave Chinner,
	"Darrick J. Wong", Theodore Ts'o, Alexander Viro, Miklos Szeredi,
	Andrew Morton, Chaitanya Kulkarni
Cc: linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-mm@kvack.org, LKML,
	John Hubbard
Subject: [PATCH 1/6] mm/gup: introduce pin_user_page()
Date: Sun, 27 Feb 2022 01:34:29 -0800
Message-Id: <20220227093434.2889464-2-jhubbard@nvidia.com>
In-Reply-To: <20220227093434.2889464-1-jhubbard@nvidia.com>
References: <20220227093434.2889464-1-jhubbard@nvidia.com>

From: John Hubbard <jhubbard@nvidia.com>

pin_user_page() is an externally-usable version of try_grab_page(), but
with semantics that match get_page(), so that it can act as a drop-in
replacement for get_page(). Specifically, pin_user_page() has a void
return type.

pin_user_page() elevates a page's refcount using FOLL_PIN rules. This
means that the caller must release the page via unpin_user_page().
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9bada4096ac..367d7fd28fd0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1946,6 +1946,7 @@ long pin_user_pages_remote(struct mm_struct *mm,
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
 		    struct vm_area_struct **vmas);
+void pin_user_page(struct page *page);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
 		    unsigned int gup_flags, struct page **pages,
 		    struct vm_area_struct **vmas);
diff --git a/mm/gup.c b/mm/gup.c
index 428c587acfa2..13c0dced2aee 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3035,6 +3035,40 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 }
 EXPORT_SYMBOL(pin_user_pages);
 
+/**
+ * pin_user_page() - apply a FOLL_PIN reference to a page
+ *
+ * @page: the page to be pinned.
+ *
+ * Similar to pin_user_pages(), in that the page's refcount is elevated using
+ * FOLL_PIN rules.
+ *
+ * IMPORTANT: That means that the caller must release the page via
+ * unpin_user_page().
+ *
+ */
+void pin_user_page(struct page *page)
+{
+	struct folio *folio = page_folio(page);
+
+	WARN_ON_ONCE(folio_ref_count(folio) <= 0);
+
+	/*
+	 * Similar to try_grab_page(): be sure to *also*
+	 * increment the normal page refcount field at least once,
+	 * so that the page really is pinned.
+	 */
+	if (folio_test_large(folio)) {
+		folio_ref_add(folio, 1);
+		atomic_add(1, folio_pincount_ptr(folio));
+	} else {
+		folio_ref_add(folio, GUP_PIN_COUNTING_BIAS);
+	}
+
+	node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, 1);
+}
+EXPORT_SYMBOL(pin_user_page);
+
 /*
  * pin_user_pages_unlocked() is the FOLL_PIN variant of
  * get_user_pages_unlocked(). Behavior is the same, except that this one sets