From patchwork Sun Feb 27 09:34:29 2022
From: John Hubbard <jhubbard@nvidia.com>
To: Jens Axboe, Jan Kara, Christoph Hellwig, Dave Chinner,
	"Darrick J. Wong", Theodore Ts'o, Alexander Viro, Miklos Szeredi,
	Andrew Morton, Chaitanya Kulkarni
Cc: linux-block@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-mm@kvack.org, LKML, John Hubbard
Subject: [PATCH 1/6] mm/gup: introduce pin_user_page()
Date: Sun, 27 Feb 2022 01:34:29 -0800
Message-Id: <20220227093434.2889464-2-jhubbard@nvidia.com>
In-Reply-To: <20220227093434.2889464-1-jhubbard@nvidia.com>
References: <20220227093434.2889464-1-jhubbard@nvidia.com>

pin_user_page() is an externally-usable version of try_grab_page(), but
with semantics that match get_page(), so that it can act as a drop-in
replacement for get_page(). Specifically, pin_user_page() has a void
return type.

pin_user_page() elevates a page's refcount using FOLL_PIN rules. This
means that the caller must release the page via unpin_user_page().
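As an illustration of the intended conversion pattern (a sketch only;
the function below is hypothetical and not part of this patch):

	/*
	 * Hypothetical caller: a transient get_page()/put_page() pair
	 * becomes pin_user_page()/unpin_user_page().
	 */
	static void dio_hold_page_sketch(struct page *page)
	{
		pin_user_page(page);	/* was: get_page(page); */

		/* ... hand the page to the I/O path and wait ... */

		unpin_user_page(page);	/* was: put_page(page); */
	}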
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/mm.h |  1 +
 mm/gup.c           | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9bada4096ac..367d7fd28fd0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1946,6 +1946,7 @@ long pin_user_pages_remote(struct mm_struct *mm,
 long get_user_pages(unsigned long start, unsigned long nr_pages,
 			unsigned int gup_flags, struct page **pages,
 			struct vm_area_struct **vmas);
+void pin_user_page(struct page *page);
 long pin_user_pages(unsigned long start, unsigned long nr_pages,
 			unsigned int gup_flags, struct page **pages,
 			struct vm_area_struct **vmas);
diff --git a/mm/gup.c b/mm/gup.c
index 428c587acfa2..13c0dced2aee 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -3035,6 +3035,40 @@ long pin_user_pages(unsigned long start, unsigned long nr_pages,
 }
 EXPORT_SYMBOL(pin_user_pages);
 
+/**
+ * pin_user_page() - apply a FOLL_PIN reference to a page
+ * @page: the page to be pinned.
+ *
+ * Similar to pin_user_pages(), in that the page's refcount is elevated
+ * using FOLL_PIN rules.
+ *
+ * IMPORTANT: The caller must release the page via unpin_user_page().
+ */
+void pin_user_page(struct page *page)
+{
+	struct folio *folio = page_folio(page);
+
+	WARN_ON_ONCE(folio_ref_count(folio) <= 0);
+
+	/*
+	 * Similar to try_grab_page(): be sure to *also* increment the
+	 * normal page refcount field at least once, so that the page
+	 * really is pinned.
+	 */
+	if (folio_test_large(folio)) {
+		folio_ref_add(folio, 1);
+		atomic_add(1, folio_pincount_ptr(folio));
+	} else {
+		folio_ref_add(folio, GUP_PIN_COUNTING_BIAS);
+	}
+
+	node_stat_mod_folio(folio, NR_FOLL_PIN_ACQUIRED, 1);
+}
+EXPORT_SYMBOL(pin_user_page);
+
 /*
  * pin_user_pages_unlocked() is the FOLL_PIN variant of
  * get_user_pages_unlocked(). Behavior is the same, except that this one sets

From patchwork Sun Feb 27 09:34:30 2022
From: John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 2/6] iov_iter: new iov_iter_pin_pages*(), for FOLL_PIN pages
Date: Sun, 27 Feb 2022 01:34:30 -0800
Message-Id: <20220227093434.2889464-3-jhubbard@nvidia.com>
Provide two new specialized routines that only handle user space pages,
and invoke pin_user_pages_fast() on them: iov_iter_pin_pages() and
iov_iter_pin_pages_alloc().

This allows subsequent patches to convert various callers of
iov_iter_get_pages*() to the new calls, without having to attempt a
mass conversion all at once.
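A sketch of the intended calling convention (illustrative only; the
enclosing function, the pages[] array size, and the error handling here
are assumptions, not part of this patch):

	struct page *pages[DIO_PAGES];
	size_t offset;
	ssize_t bytes;

	/* Pin up to DIO_PAGES user space pages; the iterator must be an iovec. */
	bytes = iov_iter_pin_pages(iter, pages, LONG_MAX, DIO_PAGES, &offset);
	if (bytes <= 0)
		return bytes ? bytes : -EFAULT;

	/* ... do the I/O on the pinned pages ... */

	/* Release with the FOLL_PIN-aware helper, never put_page(). */
	unpin_user_pages(pages, DIV_ROUND_UP(bytes + offset, PAGE_SIZE));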
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 include/linux/uio.h |  4 +++
 lib/iov_iter.c      | 78 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 739285fe5a2f..208020c2b75a 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -236,6 +236,10 @@ ssize_t iov_iter_get_pages(struct iov_iter *i, struct page **pages,
 			size_t maxsize, unsigned maxpages, size_t *start);
 ssize_t iov_iter_get_pages_alloc(struct iov_iter *i, struct page ***pages,
 			size_t maxsize, size_t *start);
+ssize_t iov_iter_pin_pages(struct iov_iter *i, struct page **pages,
+			size_t maxsize, unsigned int maxpages, size_t *start);
+ssize_t iov_iter_pin_pages_alloc(struct iov_iter *i, struct page ***pages,
+			size_t maxsize, size_t *start);
 int iov_iter_npages(const struct iov_iter *i, int maxpages);
 
 void iov_iter_restore(struct iov_iter *i, struct iov_iter_state *state);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 6dd5330f7a99..e64e8e4edd0c 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -1560,6 +1560,41 @@ ssize_t iov_iter_get_pages(struct iov_iter *i,
 }
 EXPORT_SYMBOL(iov_iter_get_pages);
 
+ssize_t iov_iter_pin_pages(struct iov_iter *i,
+		   struct page **pages, size_t maxsize, unsigned int maxpages,
+		   size_t *start)
+{
+	size_t len;
+	int n, res;
+
+	if (maxsize > i->count)
+		maxsize = i->count;
+	if (!maxsize)
+		return 0;
+
+	WARN_ON_ONCE(!iter_is_iovec(i));
+
+	if (likely(iter_is_iovec(i))) {
+		unsigned int gup_flags = 0;
+		unsigned long addr;
+
+		if (iov_iter_rw(i) != WRITE)
+			gup_flags |= FOLL_WRITE;
+		if (i->nofault)
+			gup_flags |= FOLL_NOFAULT;
+
+		addr = first_iovec_segment(i, &len, start, maxsize, maxpages);
+		n = DIV_ROUND_UP(len, PAGE_SIZE);
+		res = pin_user_pages_fast(addr, n, gup_flags, pages);
+		if (unlikely(res <= 0))
+			return res;
+		return (res == n ? len : res * PAGE_SIZE) - *start;
+	}
+
+	return -EFAULT;
+}
+EXPORT_SYMBOL(iov_iter_pin_pages);
+
 static struct page **get_pages_array(size_t n)
 {
 	return kvmalloc_array(n, sizeof(struct page *), GFP_KERNEL);
@@ -1696,6 +1731,49 @@ ssize_t iov_iter_get_pages_alloc(struct iov_iter *i,
 }
 EXPORT_SYMBOL(iov_iter_get_pages_alloc);
 
+ssize_t iov_iter_pin_pages_alloc(struct iov_iter *i,
+		   struct page ***pages, size_t maxsize,
+		   size_t *start)
+{
+	struct page **p;
+	size_t len;
+	int n, res;
+
+	if (maxsize > i->count)
+		maxsize = i->count;
+	if (!maxsize)
+		return 0;
+
+	WARN_ON_ONCE(!iter_is_iovec(i));
+
+	if (likely(iter_is_iovec(i))) {
+		unsigned int gup_flags = 0;
+		unsigned long addr;
+
+		if (iov_iter_rw(i) != WRITE)
+			gup_flags |= FOLL_WRITE;
+		if (i->nofault)
+			gup_flags |= FOLL_NOFAULT;
+
+		addr = first_iovec_segment(i, &len, start, maxsize, ~0U);
+		n = DIV_ROUND_UP(len, PAGE_SIZE);
+		p = get_pages_array(n);
+		if (!p)
+			return -ENOMEM;
+		res = pin_user_pages_fast(addr, n, gup_flags, p);
+		if (unlikely(res <= 0)) {
+			kvfree(p);
+			*pages = NULL;
+			return res;
+		}
+		*pages = p;
+		return (res == n ? len : res * PAGE_SIZE) - *start;
+	}
+
+	return -EFAULT;
+}
+EXPORT_SYMBOL(iov_iter_pin_pages_alloc);
+
 size_t csum_and_copy_from_iter(void *addr, size_t bytes, __wsum *csum,
 			       struct iov_iter *i)
 {

From patchwork Sun Feb 27 09:34:31 2022
From: John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 3/6] block, fs: assert that key paths use iovecs, and nothing else
Date: Sun, 27 Feb 2022 01:34:31 -0800
Message-Id: <20220227093434.2889464-4-jhubbard@nvidia.com>
Upcoming changes to Direct IO will switch it from acquiring pages via
get_user_pages_fast() to calling pin_user_pages_fast() instead.

Place a few assertions at key points that the iterators are ITER_IOVEC
(user space pages), to enforce the assumption that no kernel, pipe, or
other odd variations are being passed in.
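For reference, the property being asserted (a sketch; iov, kvec,
nr_segs, and count are placeholders, not taken from this patch):

	struct iov_iter i;

	/* User space buffers: iter_is_iovec(&i) is true, pages can be pinned. */
	iov_iter_init(&i, READ, iov, nr_segs, count);

	/*
	 * Kernel buffers: iter_is_iovec(&i) is false; such an iterator
	 * would now trip the WARN_ON_ONCE() assertions added below.
	 */
	iov_iter_kvec(&i, READ, kvec, nr_segs, count);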
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 block/bio.c    | 4 ++++
 fs/direct-io.c | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/block/bio.c b/block/bio.c
index b15f5466ce08..4679d6539e2d 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1167,6 +1167,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
 	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
 	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
+	WARN_ON_ONCE(!iter_is_iovec(iter));
+
 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
@@ -1217,6 +1219,8 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter)
 	BUILD_BUG_ON(PAGE_PTRS_PER_BVEC < 2);
 	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
 
+	WARN_ON_ONCE(!iter_is_iovec(iter));
+
 	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
 	if (unlikely(size <= 0))
 		return size ? size : -EFAULT;
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 38bca4980a1c..7dbbbfef300d 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -169,6 +169,8 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 {
 	ssize_t ret;
 
+	WARN_ON_ONCE(!iter_is_iovec(sdio->iter));
+
 	ret = iov_iter_get_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,
 				&sdio->from);
 

From patchwork Sun Feb 27 09:34:32 2022
From: John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 4/6] block, bio, fs: convert most filesystems to pin_user_pages_fast()
Date: Sun, 27 Feb 2022 01:34:32 -0800
Message-Id: <20220227093434.2889464-5-jhubbard@nvidia.com>
Use pin_user_pages_fast(), pin_user_page(), and unpin_user_page()
calls, in place of get_user_pages_fast(), get_page(), and put_page().

This converts the Direct IO parts of most filesystems over to using
FOLL_PIN (pin_user_page*()) page pinning.
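The mapping is mechanical. As a summary (illustrative, not lifted
verbatim from the hunks below):

	/* acquire: */
	get_user_pages_fast(...)   ->  pin_user_pages_fast(...)
	iov_iter_get_pages*(...)   ->  iov_iter_pin_pages*(...)
	get_page(page)             ->  pin_user_page(page)

	/* release: */
	put_page(page)             ->  unpin_user_page(page)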
For + * multi-segment *iter, this function only adds pages from the next non-empty + * segment of the iov iterator. */ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) { @@ -1169,7 +1168,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) WARN_ON_ONCE(!iter_is_iovec(iter)); - size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset); + size = iov_iter_pin_pages(iter, pages, LONG_MAX, nr_pages, &offset); if (unlikely(size <= 0)) return size ? size : -EFAULT; @@ -1180,7 +1179,7 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter) if (__bio_try_merge_page(bio, page, len, offset, &same_page)) { if (same_page) - put_page(page); + unpin_user_page(page); } else { if (WARN_ON_ONCE(bio_full(bio, len))) { bio_put_pages(pages + i, left, offset); @@ -1221,7 +1220,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter) WARN_ON_ONCE(!iter_is_iovec(iter)); - size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset); + size = iov_iter_pin_pages(iter, pages, LONG_MAX, nr_pages, &offset); if (unlikely(size <= 0)) return size ? size : -EFAULT; @@ -1237,7 +1236,7 @@ static int __bio_iov_append_get_pages(struct bio *bio, struct iov_iter *iter) break; } if (same_page) - put_page(page); + unpin_user_page(page); offset = 0; } @@ -1434,8 +1433,8 @@ void bio_set_pages_dirty(struct bio *bio) * the BIO and re-dirty the pages in process context. * * It is expected that bio_check_pages_dirty() will wholly own the BIO from - * here on. It will run one put_page() against each page and will run one - * bio_put() against the BIO. + * here on. It will run one unpin_user_page() against each page and will run + * one bio_put() against the BIO. */ static void bio_dirty_fn(struct work_struct *work); diff --git a/block/blk-map.c b/block/blk-map.c index c7f71d83eff1..ce450683994c 100644 --- a/block/blk-map.c +++ b/block/blk-map.c @@ -252,7 +252,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, size_t offs, added = 0; int npages; - bytes = iov_iter_get_pages_alloc(iter, &pages, LONG_MAX, &offs); + bytes = iov_iter_pin_pages_alloc(iter, &pages, LONG_MAX, &offs); if (unlikely(bytes <= 0)) { ret = bytes ? bytes : -EFAULT; goto out_unmap; @@ -275,7 +275,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, if (!bio_add_hw_page(rq->q, bio, page, n, offs, max_sectors, &same_page)) { if (same_page) - put_page(page); + unpin_user_page(page); break; } @@ -289,7 +289,7 @@ static int bio_map_user_iov(struct request *rq, struct iov_iter *iter, * release the pages we didn't map into the bio, if any */ while (j < npages) - put_page(pages[j++]); + unpin_user_page(pages[j++]); kvfree(pages); /* couldn't stuff something into bio? 
 		/* couldn't stuff something into bio? */
 		if (bytes)
diff --git a/fs/direct-io.c b/fs/direct-io.c
index 7dbbbfef300d..815407c26f57 100644
--- a/fs/direct-io.c
+++ b/fs/direct-io.c
@@ -171,7 +171,7 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 
 	WARN_ON_ONCE(!iter_is_iovec(sdio->iter));
 
-	ret = iov_iter_get_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,
+	ret = iov_iter_pin_pages(sdio->iter, dio->pages, LONG_MAX, DIO_PAGES,
 				&sdio->from);
 
 	if (ret < 0 && sdio->blocks_available && (dio->op == REQ_OP_WRITE)) {
@@ -183,7 +183,7 @@ static inline int dio_refill_pages(struct dio *dio, struct dio_submit *sdio)
 		 */
 		if (dio->page_errors == 0)
 			dio->page_errors = ret;
-		get_page(page);
+		pin_user_page(page);
 		dio->pages[0] = page;
 		sdio->head = 0;
 		sdio->tail = 1;
@@ -452,7 +452,7 @@ static inline void dio_bio_submit(struct dio *dio, struct dio_submit *sdio)
 static inline void dio_cleanup(struct dio *dio, struct dio_submit *sdio)
 {
 	while (sdio->head < sdio->tail)
-		put_page(dio->pages[sdio->head++]);
+		unpin_user_page(dio->pages[sdio->head++]);
 }
 
 /*
@@ -717,7 +717,7 @@ static inline int dio_bio_add_page(struct dio_submit *sdio)
 	 */
 	if ((sdio->cur_page_len + sdio->cur_page_offset) == PAGE_SIZE)
 		sdio->pages_in_io--;
-	get_page(sdio->cur_page);
+	pin_user_page(sdio->cur_page);
 	sdio->final_block_in_bio = sdio->cur_page_block +
 		(sdio->cur_page_len >> sdio->blkbits);
 	ret = 0;
@@ -832,13 +832,13 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 	 */
 	if (sdio->cur_page) {
 		ret = dio_send_cur_page(dio, sdio, map_bh);
-		put_page(sdio->cur_page);
+		unpin_user_page(sdio->cur_page);
 		sdio->cur_page = NULL;
 		if (ret)
 			return ret;
 	}
 
-	get_page(page);		/* It is in dio */
+	pin_user_page(page);	/* It is in dio */
 	sdio->cur_page = page;
 	sdio->cur_page_offset = offset;
 	sdio->cur_page_len = len;
@@ -853,7 +853,7 @@ submit_page_section(struct dio *dio, struct dio_submit *sdio, struct page *page,
 		ret = dio_send_cur_page(dio, sdio, map_bh);
 		if (sdio->bio)
 			dio_bio_submit(dio, sdio);
-		put_page(sdio->cur_page);
+		unpin_user_page(sdio->cur_page);
 		sdio->cur_page = NULL;
 	}
 	return ret;
@@ -953,7 +953,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 
 				ret = get_more_blocks(dio, sdio, map_bh);
 				if (ret) {
-					put_page(page);
+					unpin_user_page(page);
 					goto out;
 				}
 				if (!buffer_mapped(map_bh))
@@ -998,7 +998,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 
 				/* AKPM: eargh, -ENOTBLK is a hack */
 				if (dio->op == REQ_OP_WRITE) {
-					put_page(page);
+					unpin_user_page(page);
 					return -ENOTBLK;
 				}
 
@@ -1011,7 +1011,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 				if (sdio->block_in_file >=
 						i_size_aligned >> blkbits) {
 					/* We hit eof */
-					put_page(page);
+					unpin_user_page(page);
 					goto out;
 				}
 				zero_user(page, from, 1 << blkbits);
@@ -1051,7 +1051,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 					sdio->next_block_for_io,
 					map_bh);
 			if (ret) {
-				put_page(page);
+				unpin_user_page(page);
 				goto out;
 			}
 			sdio->next_block_for_io += this_chunk_blocks;
@@ -1067,7 +1067,7 @@ static int do_direct_IO(struct dio *dio, struct dio_submit *sdio,
 		}
 
 		/* Drop the ref which was taken in get_user_pages() */
-		put_page(page);
+		unpin_user_page(page);
 	}
 out:
 	return ret;
@@ -1289,7 +1289,7 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
 		ret2 = dio_send_cur_page(dio, &sdio, &map_bh);
 		if (retval == 0)
 			retval = ret2;
-		put_page(sdio.cur_page);
+		unpin_user_page(sdio.cur_page);
 		sdio.cur_page = NULL;
 	}
 	if (sdio.bio)
diff --git a/fs/iomap/direct-io.c b/fs/iomap/direct-io.c
index 67cf9c16f80c..d0986a31a9d1 100644
--- a/fs/iomap/direct-io.c
+++ b/fs/iomap/direct-io.c
@@ -192,7 +192,7 @@ static void iomap_dio_zero(const struct iomap_iter *iter, struct iomap_dio *dio,
 	bio->bi_private = dio;
 	bio->bi_end_io = iomap_dio_bio_end_io;
 
-	get_page(page);
+	pin_user_page(page);
 	__bio_add_page(bio, page, len, 0);
 	iomap_dio_submit_bio(iter, dio, bio, pos);
 }

From patchwork Sun Feb 27 09:34:33 2022
From: John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 5/6] NFS: direct-io: convert to FOLL_PIN pages
Date: Sun, 27 Feb 2022 01:34:33 -0800
Message-Id: <20220227093434.2889464-6-jhubbard@nvidia.com>
Convert nfs-direct to use pin_user_pages_fast() and unpin_user_pages(),
in place of get_user_pages_fast() and put_page().
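Both the read and write scheduling loops end up with the same pairing;
roughly (a sketch condensed from the hunks below, with error handling
and the request-packaging loop elided):

	result = iov_iter_pin_pages_alloc(iter, &pagevec, rsize, &pgbase);
	if (result < 0)
		break;
	npages = (result + pgbase + PAGE_SIZE - 1) / PAGE_SIZE;

	/* ... package pagevec[0..npages-1] into NFS requests ... */

	unpin_user_pages(pagevec, npages);	/* was: nfs_direct_release_pages() */
	kvfree(pagevec);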
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 fs/nfs/direct.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index eabfdab543c8..42111a75c0f7 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -177,13 +177,6 @@ ssize_t nfs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
 	return nfs_file_direct_write(iocb, iter);
 }
 
-static void nfs_direct_release_pages(struct page **pages, unsigned int npages)
-{
-	unsigned int i;
-	for (i = 0; i < npages; i++)
-		put_page(pages[i]);
-}
-
 void nfs_init_cinfo_from_dreq(struct nfs_commit_info *cinfo,
 			      struct nfs_direct_req *dreq)
 {
@@ -367,7 +360,7 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
 		size_t pgbase;
 		unsigned npages, i;
 
-		result = iov_iter_get_pages_alloc(iter, &pagevec,
+		result = iov_iter_pin_pages_alloc(iter, &pagevec,
 						  rsize, &pgbase);
 		if (result < 0)
 			break;
@@ -398,7 +391,7 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
 			pos += req_len;
 			dreq->bytes_left -= req_len;
 		}
-		nfs_direct_release_pages(pagevec, npages);
+		unpin_user_pages(pagevec, npages);
 		kvfree(pagevec);
 		if (result < 0)
 			break;
@@ -811,7 +804,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
 		size_t pgbase;
 		unsigned npages, i;
 
-		result = iov_iter_get_pages_alloc(iter, &pagevec,
+		result = iov_iter_pin_pages_alloc(iter, &pagevec,
 						  wsize, &pgbase);
 		if (result < 0)
 			break;
@@ -850,7 +843,7 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
 			pos += req_len;
 			dreq->bytes_left -= req_len;
 		}
-		nfs_direct_release_pages(pagevec, npages);
+		unpin_user_pages(pagevec, npages);
 		kvfree(pagevec);
 		if (result < 0)
 			break;

From patchwork Sun Feb 27 09:34:34 2022
From: John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 6/6] fuse: convert direct IO paths to use FOLL_PIN
Date: Sun, 27 Feb 2022 01:34:34 -0800
Message-Id: <20220227093434.2889464-7-jhubbard@nvidia.com>
Convert the fuse filesystem to the new iov_iter_pin_pages() calls.
Those routines invoke pin_user_pages_fast(), which means that such
pages must be released via unpin_user_page(), rather than via
put_page().

This commit also removes any possibility of kernel (ITER_KVEC) pages
being handled in the fuse_get_user_pages() call. Although this may seem
like a steep price to pay, Christoph Hellwig actually recommended it a
few years ago, for nearly the same situation [1].

[1] https://lore.kernel.org/kvm/20190724061750.GA19397@infradead.org/
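One detail worth spelling out (illustrative; it mirrors the
fuse_release_user_pages() removal below): the per-page dirty-then-release
loop collapses into a single call:

	/*
	 * Was:
	 *	for (i = 0; i < ap->num_pages; i++) {
	 *		if (should_dirty)
	 *			set_page_dirty_lock(ap->pages[i]);
	 *		put_page(ap->pages[i]);
	 *	}
	 */
	unpin_user_pages_dirty_lock(ap->pages, ap->num_pages, io->should_dirty);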
Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 fs/fuse/dev.c  |  7 +++++--
 fs/fuse/file.c | 38 +++++++++-----------------------------
 2 files changed, 14 insertions(+), 31 deletions(-)

diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index e1b4a846c90d..9db85c4d549a 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -675,7 +675,10 @@ static void fuse_copy_finish(struct fuse_copy_state *cs)
 			flush_dcache_page(cs->pg);
 			set_page_dirty_lock(cs->pg);
 		}
-		put_page(cs->pg);
+		if (cs->pipebufs)
+			put_page(cs->pg);
+		else
+			unpin_user_page(cs->pg);
 	}
 	cs->pg = NULL;
 }
@@ -730,7 +733,7 @@ static int fuse_copy_fill(struct fuse_copy_state *cs)
 		}
 	} else {
 		size_t off;
-		err = iov_iter_get_pages(cs->iter, &page, PAGE_SIZE, 1, &off);
+		err = iov_iter_pin_pages(cs->iter, &page, PAGE_SIZE, 1, &off);
 		if (err < 0)
 			return err;
 		BUG_ON(!err);
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 94747bac3489..ecfa5bdde919 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -611,18 +611,6 @@ void fuse_read_args_fill(struct fuse_io_args *ia, struct file *file, loff_t pos,
 	args->out_args[0].size = count;
 }
 
-static void fuse_release_user_pages(struct fuse_args_pages *ap,
-				    bool should_dirty)
-{
-	unsigned int i;
-
-	for (i = 0; i < ap->num_pages; i++) {
-		if (should_dirty)
-			set_page_dirty_lock(ap->pages[i]);
-		put_page(ap->pages[i]);
-	}
-}
-
 static void fuse_io_release(struct kref *kref)
 {
 	kfree(container_of(kref, struct fuse_io_priv, refcnt));
@@ -720,7 +708,8 @@ static void fuse_aio_complete_req(struct fuse_mount *fm, struct fuse_args *args,
 	struct fuse_io_priv *io = ia->io;
 	ssize_t pos = -1;
 
-	fuse_release_user_pages(&ia->ap, io->should_dirty);
+	unpin_user_pages_dirty_lock(ia->ap.pages, ia->ap.num_pages,
+				    io->should_dirty);
 
 	if (err) {
 		/* Nothing */
@@ -1382,25 +1371,14 @@ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
 	size_t nbytes = 0;  /* # bytes already packed in req */
 	ssize_t ret = 0;
 
-	/* Special case for kernel I/O: can copy directly into the buffer */
-	if (iov_iter_is_kvec(ii)) {
-		unsigned long user_addr = fuse_get_user_addr(ii);
-		size_t frag_size = fuse_get_frag_size(ii, *nbytesp);
-
-		if (write)
-			ap->args.in_args[1].value = (void *) user_addr;
-		else
-			ap->args.out_args[0].value = (void *) user_addr;
-
-		iov_iter_advance(ii, frag_size);
-		*nbytesp = frag_size;
-		return 0;
-	}
+	/* Only user space buffers are allowed with fuse Direct IO. */
+	if (WARN_ON_ONCE(!iter_is_iovec(ii)))
+		return -EOPNOTSUPP;
 
 	while (nbytes < *nbytesp && ap->num_pages < max_pages) {
 		unsigned npages;
 		size_t start;
-		ret = iov_iter_get_pages(ii, &ap->pages[ap->num_pages],
+		ret = iov_iter_pin_pages(ii, &ap->pages[ap->num_pages],
 					*nbytesp - nbytes,
 					max_pages - ap->num_pages,
 					&start);
@@ -1484,7 +1462,9 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
 		}
 
 		if (!io->async || nres < 0) {
-			fuse_release_user_pages(&ia->ap, io->should_dirty);
+			unpin_user_pages_dirty_lock(ia->ap.pages,
+						    ia->ap.num_pages,
+						    io->should_dirty);
 			fuse_io_free(ia);
 		}
 		ia = NULL;