From patchwork Wed Jul 24 04:25:07 2019
From: John Hubbard <jhubbard@nvidia.com>
Miller" , Dominique Martinet , Eric Van Hensbergen , Jason Gunthorpe , Jason Wang , Jens Axboe , Latchesar Ionkov , "Michael S . Tsirkin" , Miklos Szeredi , Trond Myklebust , Christoph Hellwig , Matthew Wilcox , linux-mm@kvack.org, LKML , ceph-devel@vger.kernel.org, kvm@vger.kernel.org, linux-block@vger.kernel.org, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, netdev@vger.kernel.org, samba-technical@lists.samba.org, v9fs-developer@lists.sourceforge.net, virtualization@lists.linux-foundation.org, John Hubbard , Jan Kara , Ira Weiny Subject: [PATCH 01/12] mm/gup: add make_dirty arg to put_user_pages_dirty_lock() Date: Tue, 23 Jul 2019 21:25:07 -0700 Message-Id: <20190724042518.14363-2-jhubbard@nvidia.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com> References: <20190724042518.14363-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public Sender: linux-cifs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-cifs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: John Hubbard Provide more capable variation of put_user_pages_dirty_lock(), and delete put_user_pages_dirty(). This is based on the following: 1. Lots of call sites become simpler if a bool is passed into put_user_page*(), instead of making the call site choose which put_user_page*() variant to call. 2. Christoph Hellwig's observation that set_page_dirty_lock() is usually correct, and set_page_dirty() is usually a bug, or at least questionable, within a put_user_page*() calling chain. This leads to the following API choices: * put_user_pages_dirty_lock(page, npages, make_dirty) * There is no put_user_pages_dirty(). You have to hand code that, in the rare case that it's required. 
Cc: Matthew Wilcox
Cc: Jan Kara
Cc: Christoph Hellwig
Cc: Ira Weiny
Cc: Jason Gunthorpe
Signed-off-by: John Hubbard
---
 drivers/infiniband/core/umem.c             |   5 +-
 drivers/infiniband/hw/hfi1/user_pages.c    |   5 +-
 drivers/infiniband/hw/qib/qib_user_pages.c |   5 +-
 drivers/infiniband/hw/usnic/usnic_uiom.c   |   5 +-
 drivers/infiniband/sw/siw/siw_mem.c        |   8 +-
 include/linux/mm.h                         |   5 +-
 mm/gup.c                                   | 115 +++++++++------------
 7 files changed, 58 insertions(+), 90 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 08da840ed7ee..965cf9dea71a 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -54,10 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
 		page = sg_page_iter_page(&sg_iter);
-		if (umem->writable && dirty)
-			put_user_pages_dirty_lock(&page, 1);
-		else
-			put_user_page(page);
+		put_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
 	}
 
 	sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index b89a9b9aef7a..469acb961fbd 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -118,10 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	if (dirty)
-		put_user_pages_dirty_lock(p, npages);
-	else
-		put_user_pages(p, npages);
+	put_user_pages_dirty_lock(p, npages, dirty);
 
 	if (mm) { /* during close after signal, mm can be NULL */
 		atomic64_sub(npages, &mm->pinned_vm);
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index bfbfbb7e0ff4..6bf764e41891 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -40,10 +40,7 @@ static void __qib_release_user_pages(struct page **p, size_t num_pages,
 				     int dirty)
 {
-	if (dirty)
-		put_user_pages_dirty_lock(p, num_pages);
-	else
-		put_user_pages(p, num_pages);
+	put_user_pages_dirty_lock(p, num_pages, dirty);
 }
 
 /**
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 0b0237d41613..62e6ffa9ad78 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -75,10 +75,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
 			page = sg_page(sg);
 			pa = sg_phys(sg);
-			if (dirty)
-				put_user_pages_dirty_lock(&page, 1);
-			else
-				put_user_page(page);
+			put_user_pages_dirty_lock(&page, 1, dirty);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);
diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index 67171c82b0c4..358d440efa11 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -65,13 +65,7 @@ static void siw_free_plist(struct siw_page_chunk *chunk, int num_pages,
 {
 	struct page **p = chunk->plist;
 
-	while (num_pages--) {
-		if (!PageDirty(*p) && dirty)
-			put_user_pages_dirty_lock(p, 1);
-		else
-			put_user_page(*p);
-		p++;
-	}
+	put_user_pages_dirty_lock(chunk->plist, num_pages, dirty);
 }
 
 void siw_umem_release(struct siw_umem *umem, bool dirty)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..9759b6a24420 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1057,8 +1057,9 @@ static inline
 void put_user_page(struct page *page)
 {
 	put_page(page);
 }
 
-void put_user_pages_dirty(struct page **pages, unsigned long npages);
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty);
+
 void put_user_pages(struct page **pages, unsigned long npages);
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..7fefd7ab02c4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,85 +29,70 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
-typedef int (*set_dirty_func_t)(struct page *page);
-
-static void __put_user_pages_dirty(struct page **pages,
-				   unsigned long npages,
-				   set_dirty_func_t sdf)
-{
-	unsigned long index;
-
-	for (index = 0; index < npages; index++) {
-		struct page *page = compound_head(pages[index]);
-
-		/*
-		 * Checking PageDirty at this point may race with
-		 * clear_page_dirty_for_io(), but that's OK. Two key cases:
-		 *
-		 * 1) This code sees the page as already dirty, so it skips
-		 * the call to sdf(). That could happen because
-		 * clear_page_dirty_for_io() called page_mkclean(),
-		 * followed by set_page_dirty(). However, now the page is
-		 * going to get written back, which meets the original
-		 * intention of setting it dirty, so all is well:
-		 * clear_page_dirty_for_io() goes on to call
-		 * TestClearPageDirty(), and write the page back.
-		 *
-		 * 2) This code sees the page as clean, so it calls sdf().
-		 * The page stays dirty, despite being written back, so it
-		 * gets written back again in the next writeback cycle.
-		 * This is harmless.
-		 */
-		if (!PageDirty(page))
-			sdf(page);
-
-		put_user_page(page);
-	}
-}
-
 /**
- * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
- * @pages: array of pages to be marked dirty and released.
+ * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
+ * @pages: array of pages to be maybe marked dirty, and definitely released.
  * @npages: number of pages in the @pages array.
+ * @make_dirty: whether to mark the pages dirty
  *
  * "gup-pinned page" refers to a page that has had one of the get_user_pages()
  * variants called on that page.
  *
  * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
+ * compound page) dirty, if @make_dirty is true, and if the page was previously
+ * listed as clean. In any case, releases all pages using put_user_page(),
+ * possibly via put_user_pages(), for the non-dirty case.
  *
  * Please see the put_user_page() documentation for details.
  *
- * set_page_dirty(), which does not lock the page, is used here.
- * Therefore, it is the caller's responsibility to ensure that this is
- * safe. If not, then put_user_pages_dirty_lock() should be called instead.
+ * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is
+ * required, then the caller should a) verify that this is really correct,
+ * because _lock() is usually required, and b) hand code it:
+ * set_page_dirty(), put_user_page().
  *
  */
-void put_user_pages_dirty(struct page **pages, unsigned long npages)
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty)
 {
-	__put_user_pages_dirty(pages, npages, set_page_dirty);
-}
-EXPORT_SYMBOL(put_user_pages_dirty);
+	unsigned long index;
 
-/**
- * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
- * @pages: array of pages to be marked dirty and released.
- * @npages: number of pages in the @pages array.
- *
- * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
- *
- * Please see the put_user_page() documentation for details.
- *
- * This is just like put_user_pages_dirty(), except that it invokes
- * set_page_dirty_lock(), instead of set_page_dirty().
- *
- */
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
-{
-	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
+	/*
+	 * TODO: this can be optimized for huge pages: if a series of pages is
+	 * physically contiguous and part of the same compound page, then a
+	 * single operation to the head page should suffice.
+	 */
+
+	if (!make_dirty) {
+		put_user_pages(pages, npages);
+		return;
+	}
+
+	for (index = 0; index < npages; index++) {
+		struct page *page = compound_head(pages[index]);
+		/*
+		 * Checking PageDirty at this point may race with
+		 * clear_page_dirty_for_io(), but that's OK. Two key
+		 * cases:
+		 *
+		 * 1) This code sees the page as already dirty, so it
+		 * skips the call to set_page_dirty(). That could happen
+		 * because clear_page_dirty_for_io() called
+		 * page_mkclean(), followed by set_page_dirty().
+		 * However, now the page is going to get written back,
+		 * which meets the original intention of setting it
+		 * dirty, so all is well: clear_page_dirty_for_io() goes
+		 * on to call TestClearPageDirty(), and write the page
+		 * back.
+		 *
+		 * 2) This code sees the page as clean, so it calls
+		 * set_page_dirty(). The page stays dirty, despite being
+		 * written back, so it gets written back again in the
+		 * next writeback cycle. This is harmless.
+		 */
+		if (!PageDirty(page))
+			set_page_dirty_lock(page);
+		put_user_page(page);
+	}
 }
 EXPORT_SYMBOL(put_user_pages_dirty_lock);

From patchwork Wed Jul 24 04:25:08 2019
From: John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 02/12] iov_iter: add helper to test if an iter would use GUP v2
Date: Tue, 23 Jul 2019 21:25:08 -0700
Message-Id: <20190724042518.14363-3-jhubbard@nvidia.com>
In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com>

From: Jérôme Glisse

Add a helper to test whether a call to iov_iter_get_pages*() with a given
iter would result in calls to GUP (get_user_pages*()). We want to use
different tracking of page references when they come from GUP, and thus we
need to know whether GUP is used for a given iter.

Changes since Jérôme's original patch:

* iov_iter_get_pages_use_gup(): do not return true for the ITER_PIPE case,
  because iov_iter_get_pages() calls pipe_get_pages(), which in turn uses
  get_page(), not get_user_pages().

* Remove some obsolete code, as part of rebasing onto Linux 5.3.

* Fix up the kerneldoc comment to "Return:" rather than "Returns:", and a
  few other grammatical tweaks.

Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: John Hubbard
Cc: Jan Kara
Cc: Dan Williams
Cc: Alexander Viro
Cc: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Jens Axboe
Cc: Ming Lei
Cc: Dave Chinner
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
---
 include/linux/uio.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index ab5f523bc0df..2a179af8e5a7 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -86,6 +86,17 @@ static inline unsigned char iov_iter_rw(const struct iov_iter *i)
 	return i->type & (READ | WRITE);
 }
 
+/**
+ * iov_iter_get_pages_use_gup - report if iov_iter_get_pages(i) uses GUP
+ * @i: iterator
+ * Return: true if a call to iov_iter_get_pages*() with the iter provided in
+ * the argument would result in the use of get_user_pages*()
+ */
+static inline bool iov_iter_get_pages_use_gup(const struct iov_iter *i)
+{
+	return iov_iter_type(i) == ITER_IOVEC;
+}
+
 /*
  * Total number of bytes covered by an iovec.
  */
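As a sketch of the intended use: a caller that filled a page array via
iov_iter_get_pages() can pick the matching release routine. Only
iov_iter_get_pages_use_gup() comes from this patch; my_release_pages() is
invented for illustration:

	/*
	 * Sketch only: choose the release routine to match how the pages
	 * were obtained from the iter.
	 */
	static void my_release_pages(struct iov_iter *iter,
				     struct page **pages,
				     unsigned long npages)
	{
		if (iov_iter_get_pages_use_gup(iter)) {
			/* ITER_IOVEC: pages were pinned by get_user_pages*() */
			put_user_pages(pages, npages);
		} else {
			/* e.g. ITER_BVEC/ITER_PIPE: plain get_page() refs */
			while (npages--)
				put_page(pages[npages]);
		}
	}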
From patchwork Wed Jul 24 04:25:09 2019
From: John Hubbard <jhubbard@nvidia.com>
Miller" , Dominique Martinet , Eric Van Hensbergen , Jason Gunthorpe , Jason Wang , Jens Axboe , Latchesar Ionkov , "Michael S . Tsirkin" , Miklos Szeredi , Trond Myklebust , Christoph Hellwig , Matthew Wilcox , linux-mm@kvack.org, LKML , ceph-devel@vger.kernel.org, kvm@vger.kernel.org, linux-block@vger.kernel.org, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, netdev@vger.kernel.org, samba-technical@lists.samba.org, v9fs-developer@lists.sourceforge.net, virtualization@lists.linux-foundation.org, John Hubbard , Christoph Hellwig , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Minwoo Im Subject: [PATCH 03/12] block: bio_release_pages: use flags arg instead of bool Date: Tue, 23 Jul 2019 21:25:09 -0700 Message-Id: <20190724042518.14363-4-jhubbard@nvidia.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com> References: <20190724042518.14363-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public Sender: linux-cifs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-cifs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: John Hubbard In commit d241a95f3514 ("block: optionally mark pages dirty in bio_release_pages"), new "bool mark_dirty" argument was added to bio_release_pages. In upcoming work, another bool argument (to indicate that the pages came from get_user_pages) is going to be added. That's one bool too many, because it's not desirable have calls of the form: foo(true, false, true, etc); In order to prepare for that, change the argument from a bool, to a typesafe (enum-based) flags argument. Cc: Christoph Hellwig Cc: Jérôme Glisse Cc: Minwoo Im Cc: Jens Axboe Signed-off-by: John Hubbard --- block/bio.c | 12 ++++++------ fs/block_dev.c | 4 ++-- fs/direct-io.c | 2 +- include/linux/bio.h | 13 ++++++++++++- 4 files changed, 21 insertions(+), 10 deletions(-) diff --git a/block/bio.c b/block/bio.c index 299a0e7651ec..7675e2de509d 100644 --- a/block/bio.c +++ b/block/bio.c @@ -833,7 +833,7 @@ int bio_add_page(struct bio *bio, struct page *page, } EXPORT_SYMBOL(bio_add_page); -void bio_release_pages(struct bio *bio, bool mark_dirty) +void bio_release_pages(struct bio *bio, enum bio_rp_flags_t flags) { struct bvec_iter_all iter_all; struct bio_vec *bvec; @@ -842,7 +842,7 @@ void bio_release_pages(struct bio *bio, bool mark_dirty) return; bio_for_each_segment_all(bvec, bio, iter_all) { - if (mark_dirty && !PageCompound(bvec->bv_page)) + if ((flags & BIO_RP_MARK_DIRTY) && !PageCompound(bvec->bv_page)) set_page_dirty_lock(bvec->bv_page); put_page(bvec->bv_page); } @@ -1421,7 +1421,7 @@ struct bio *bio_map_user_iov(struct request_queue *q, return bio; out_unmap: - bio_release_pages(bio, false); + bio_release_pages(bio, BIO_RP_NORMAL); bio_put(bio); return ERR_PTR(ret); } @@ -1437,7 +1437,7 @@ struct bio *bio_map_user_iov(struct request_queue *q, */ void bio_unmap_user(struct bio *bio) { - bio_release_pages(bio, bio_data_dir(bio) == READ); + bio_release_pages(bio, bio_rp_dirty_flag(bio_data_dir(bio) == READ)); bio_put(bio); bio_put(bio); } @@ -1683,7 +1683,7 @@ static void bio_dirty_fn(struct work_struct *work) while ((bio = next) != NULL) { next = bio->bi_private; - bio_release_pages(bio, true); + bio_release_pages(bio, BIO_RP_MARK_DIRTY); bio_put(bio); } } @@ -1699,7 +1699,7 @@ void bio_check_pages_dirty(struct bio *bio) goto defer; } - bio_release_pages(bio, false); + bio_release_pages(bio, BIO_RP_NORMAL); bio_put(bio); return; defer: diff --git 
From patchwork Wed Jul 24 04:25:10 2019
From: John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 04/12] block: bio_release_pages: convert put_page() to put_user_page*()
Date: Tue, 23 Jul 2019 21:25:10 -0700
Message-Id: <20190724042518.14363-5-jhubbard@nvidia.com>
In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com>

From: Jérôme Glisse

For pages that were retained via get_user_pages*(), release those pages via
the new put_user_page*() routines, instead of via put_page() or
release_pages(). This is part of a tree-wide conversion, as described in
commit fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").
Changes from Jérôme's original patch:

* reworked to be compatible with recent bio_release_pages() changes,

* refactored slightly to remove some code duplication,

* use an approach that changes fewer bio_check_pages_dirty() callers.

Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: Christoph Hellwig
Cc: Minwoo Im
Cc: Jens Axboe
---
 block/bio.c         | 60 ++++++++++++++++++++++++++++++++++++---------
 include/linux/bio.h |  1 +
 2 files changed, 49 insertions(+), 12 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 7675e2de509d..74f9eba2583b 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -844,7 +844,11 @@ void bio_release_pages(struct bio *bio, enum bio_rp_flags_t flags)
 	bio_for_each_segment_all(bvec, bio, iter_all) {
 		if ((flags & BIO_RP_MARK_DIRTY) && !PageCompound(bvec->bv_page))
 			set_page_dirty_lock(bvec->bv_page);
-		put_page(bvec->bv_page);
+
+		if (flags & BIO_RP_FROM_GUP)
+			put_user_page(bvec->bv_page);
+		else
+			put_page(bvec->bv_page);
 	}
 }
 
@@ -1667,28 +1671,50 @@ static void bio_dirty_fn(struct work_struct *work);
 static DECLARE_WORK(bio_dirty_work, bio_dirty_fn);
 static DEFINE_SPINLOCK(bio_dirty_lock);
 static struct bio *bio_dirty_list;
+static struct bio *bio_gup_dirty_list;
 
-/*
- * This runs in process context
- */
-static void bio_dirty_fn(struct work_struct *work)
+static void __bio_dirty_fn(struct work_struct *work,
+			   struct bio **dirty_list,
+			   enum bio_rp_flags_t flags)
 {
 	struct bio *bio, *next;
 
 	spin_lock_irq(&bio_dirty_lock);
-	next = bio_dirty_list;
-	bio_dirty_list = NULL;
+	next = *dirty_list;
+	*dirty_list = NULL;
 	spin_unlock_irq(&bio_dirty_lock);
 
 	while ((bio = next) != NULL) {
 		next = bio->bi_private;
-		bio_release_pages(bio, BIO_RP_MARK_DIRTY);
+		bio_release_pages(bio, BIO_RP_MARK_DIRTY | flags);
 		bio_put(bio);
 	}
 }
 
-void bio_check_pages_dirty(struct bio *bio)
+/*
+ * This runs in process context
+ */
+static void bio_dirty_fn(struct work_struct *work)
+{
+	__bio_dirty_fn(work, &bio_dirty_list, BIO_RP_NORMAL);
+	__bio_dirty_fn(work, &bio_gup_dirty_list, BIO_RP_FROM_GUP);
+}
+
+/**
+ * __bio_check_pages_dirty() - queue up pages on a workqueue to dirty them
+ * @bio: the bio struct containing the pages we should dirty
+ * @from_gup: whether the pages in the bio came from GUP (get_user_pages*())
+ *
+ * This will go over all pages in the bio, and for each non-dirty page, the
+ * bio is added to a list of bios that need to get their pages dirtied.
+ *
+ * We also need to know if the pages in the bio are coming from GUP or not,
+ * as GUP-pinned pages need to be released via put_user_page(), instead of
+ * put_page(). Please see Documentation/vm/get_user_pages.rst for details
+ * on that.
+ */
+void __bio_check_pages_dirty(struct bio *bio, bool from_gup)
 {
 	struct bio_vec *bvec;
 	unsigned long flags;
@@ -1699,17 +1725,27 @@ void bio_check_pages_dirty(struct bio *bio)
 		goto defer;
 	}
 
-	bio_release_pages(bio, BIO_RP_NORMAL);
+	bio_release_pages(bio, from_gup ? BIO_RP_FROM_GUP : BIO_RP_NORMAL);
 	bio_put(bio);
 	return;
 defer:
 	spin_lock_irqsave(&bio_dirty_lock, flags);
-	bio->bi_private = bio_dirty_list;
-	bio_dirty_list = bio;
+	if (from_gup) {
+		bio->bi_private = bio_gup_dirty_list;
+		bio_gup_dirty_list = bio;
+	} else {
+		bio->bi_private = bio_dirty_list;
+		bio_dirty_list = bio;
+	}
 	spin_unlock_irqrestore(&bio_dirty_lock, flags);
 	schedule_work(&bio_dirty_work);
 }
 
+void bio_check_pages_dirty(struct bio *bio)
+{
+	__bio_check_pages_dirty(bio, false);
+}
+
 void update_io_ticks(struct hd_struct *part, unsigned long now)
 {
 	unsigned long stamp;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 2715e55679c1..d68a40c2c9d4 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -444,6 +444,7 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter);
 enum bio_rp_flags_t {
 	BIO_RP_NORMAL		= 0,
 	BIO_RP_MARK_DIRTY	= 1,
+	BIO_RP_FROM_GUP		= 2,
 };
 
 static inline enum bio_rp_flags_t bio_rp_dirty_flag(bool mark_dirty)
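A sketch of how the two flags compose at a completion site (hypothetical
caller, not code from this series; my_read_end_io() is an invented name):

	/*
	 * Sketch only: pages filled by a READ must be marked dirty, and
	 * GUP-pinned pages must additionally be released via
	 * put_user_page() rather than put_page(). Both concerns are
	 * expressed in a single flags argument.
	 */
	static void my_read_end_io(struct bio *bio, bool from_gup)
	{
		bio_release_pages(bio, BIO_RP_MARK_DIRTY |
				  (from_gup ? BIO_RP_FROM_GUP : BIO_RP_NORMAL));
		bio_put(bio);
	}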
From patchwork Wed Jul 24 04:25:11 2019
From: John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 05/12] block_dev: convert put_page() to put_user_page*()
Date: Tue, 23 Jul 2019 21:25:11 -0700
Message-Id: <20190724042518.14363-6-jhubbard@nvidia.com>
In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com>

From: Jérôme Glisse

For pages that were retained via get_user_pages*(), release those pages via
the new put_user_page*() routines, instead of via put_page() or
release_pages(). This is part of a tree-wide conversion, as described in
commit fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

Changes from Jérôme's original patch:

* reworked to be compatible with recent bio_release_pages() changes.
Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Jan Kara
Cc: Dan Williams
Cc: Alexander Viro
Cc: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Jens Axboe
Cc: Ming Lei
Cc: Dave Chinner
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
Cc: Boaz Harrosh
---
 block/bio.c         | 13 +++++++++++++
 fs/block_dev.c      | 22 +++++++++++++++++-----
 include/linux/bio.h |  8 ++++++++
 3 files changed, 38 insertions(+), 5 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 74f9eba2583b..3b9f66e64bc1 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1746,6 +1746,19 @@ void bio_check_pages_dirty(struct bio *bio)
 	__bio_check_pages_dirty(bio, false);
 }
 
+enum bio_rp_flags_t bio_rp_flags(struct iov_iter *iter, bool mark_dirty)
+{
+	enum bio_rp_flags_t flags = BIO_RP_NORMAL;
+
+	if (mark_dirty)
+		flags |= BIO_RP_MARK_DIRTY;
+
+	if (iov_iter_get_pages_use_gup(iter))
+		flags |= BIO_RP_FROM_GUP;
+
+	return flags;
+}
+
 void update_io_ticks(struct hd_struct *part, unsigned long now)
 {
 	unsigned long stamp;
diff --git a/fs/block_dev.c b/fs/block_dev.c
index 9fe6616f8788..d53abaf31e54 100644
--- a/fs/block_dev.c
+++ b/fs/block_dev.c
@@ -259,7 +259,7 @@ __blkdev_direct_IO_simple(struct kiocb *iocb, struct iov_iter *iter,
 	}
 	__set_current_state(TASK_RUNNING);
 
-	bio_release_pages(&bio, bio_rp_dirty_flag(should_dirty));
+	bio_release_pages(&bio, bio_rp_flags(iter, should_dirty));
 	if (unlikely(bio.bi_status))
 		ret = blk_status_to_errno(bio.bi_status);
@@ -295,7 +295,7 @@ static int blkdev_iopoll(struct kiocb *kiocb, bool wait)
 	return blk_poll(q, READ_ONCE(kiocb->ki_cookie), wait);
 }
 
-static void blkdev_bio_end_io(struct bio *bio)
+static void _blkdev_bio_end_io(struct bio *bio, bool from_gup)
 {
 	struct blkdev_dio *dio = bio->bi_private;
 	bool should_dirty = dio->should_dirty;
@@ -327,13 +327,23 @@ static void blkdev_bio_end_io(struct bio *bio)
 	}
 
 	if (should_dirty) {
-		bio_check_pages_dirty(bio);
+		__bio_check_pages_dirty(bio, from_gup);
 	} else {
-		bio_release_pages(bio, BIO_RP_NORMAL);
+		bio_release_pages(bio, bio_rp_gup_flag(from_gup));
 		bio_put(bio);
 	}
 }
 
+static void blkdev_bio_end_io(struct bio *bio)
+{
+	_blkdev_bio_end_io(bio, false);
+}
+
+static void blkdev_bio_from_gup_end_io(struct bio *bio)
+{
+	_blkdev_bio_end_io(bio, true);
+}
+
 static ssize_t
 __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 {
@@ -380,7 +390,9 @@ __blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter, int nr_pages)
 		bio->bi_iter.bi_sector = pos >> 9;
 		bio->bi_write_hint = iocb->ki_hint;
 		bio->bi_private = dio;
-		bio->bi_end_io = blkdev_bio_end_io;
+		bio->bi_end_io = iov_iter_get_pages_use_gup(iter) ?
+				blkdev_bio_from_gup_end_io :
+				blkdev_bio_end_io;
 		bio->bi_ioprio = iocb->ki_ioprio;
 
 		ret = bio_iov_iter_get_pages(bio, iter);
diff --git a/include/linux/bio.h b/include/linux/bio.h
index d68a40c2c9d4..b9460d1a4679 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -452,6 +452,13 @@ static inline enum bio_rp_flags_t bio_rp_dirty_flag(bool mark_dirty)
 	return mark_dirty ? BIO_RP_MARK_DIRTY : BIO_RP_NORMAL;
 }
 
+static inline enum bio_rp_flags_t bio_rp_gup_flag(bool from_gup)
+{
+	return from_gup ? BIO_RP_FROM_GUP : BIO_RP_NORMAL;
+}
+
+enum bio_rp_flags_t bio_rp_flags(struct iov_iter *iter, bool mark_dirty);
+
 void bio_release_pages(struct bio *bio, enum bio_rp_flags_t flags);
 
 struct rq_map_data;
 extern struct bio *bio_map_user_iov(struct request_queue *,
 				    struct iov_iter *, gfp_t);
@@ -463,6 +470,7 @@ extern struct bio *bio_copy_kern(struct request_queue *,
 				 void *, unsigned int, gfp_t, int);
 extern void bio_set_pages_dirty(struct bio *bio);
 extern void bio_check_pages_dirty(struct bio *bio);
+void __bio_check_pages_dirty(struct bio *bio, bool from_gup);
 void generic_start_io_acct(struct request_queue *q, int op,
 			   unsigned long sectors, struct hd_struct *part);
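A sketch of a caller using the new bio_rp_flags() helper (hypothetical
completion path, not code from this series; my_dio_complete() is an
invented name):

	/*
	 * Sketch only: bio_rp_flags() inspects the iter once, so the
	 * release path no longer needs separate branches for GUP vs.
	 * non-GUP pages, or for dirty vs. clean release.
	 */
	static void my_dio_complete(struct bio *bio, struct iov_iter *iter,
				    bool should_dirty)
	{
		bio_release_pages(bio, bio_rp_flags(iter, should_dirty));
		bio_put(bio);
	}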
BIO_RP_FROM_GUP : BIO_RP_NORMAL; +} + +enum bio_rp_flags_t bio_rp_flags(struct iov_iter *iter, bool mark_dirty); + void bio_release_pages(struct bio *bio, enum bio_rp_flags_t flags); struct rq_map_data; extern struct bio *bio_map_user_iov(struct request_queue *, @@ -463,6 +470,7 @@ extern struct bio *bio_copy_kern(struct request_queue *, void *, unsigned int, gfp_t, int); extern void bio_set_pages_dirty(struct bio *bio); extern void bio_check_pages_dirty(struct bio *bio); +void __bio_check_pages_dirty(struct bio *bio, bool from_gup); void generic_start_io_acct(struct request_queue *q, int op, unsigned long sectors, struct hd_struct *part); From patchwork Wed Jul 24 04:25:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: john.hubbard@gmail.com X-Patchwork-Id: 11055837 Return-Path: Received: from mail.wl.linuxfoundation.org (pdx-wl-mail.web.codeaurora.org [172.30.200.125]) by pdx-korg-patchwork-2.web.codeaurora.org (Postfix) with ESMTP id 243A513B1 for ; Wed, 24 Jul 2019 04:27:02 +0000 (UTC) Received: from mail.wl.linuxfoundation.org (localhost [127.0.0.1]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 15FB026242 for ; Wed, 24 Jul 2019 04:27:02 +0000 (UTC) Received: by mail.wl.linuxfoundation.org (Postfix, from userid 486) id 08B602877F; Wed, 24 Jul 2019 04:27:02 +0000 (UTC) X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on pdx-wl-mail.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-8.0 required=2.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FROM,MAILING_LIST_MULTI,RCVD_IN_DNSWL_HI autolearn=unavailable version=3.3.1 Received: from vger.kernel.org (vger.kernel.org [209.132.180.67]) by mail.wl.linuxfoundation.org (Postfix) with ESMTP id 9AF3226242 for ; Wed, 24 Jul 2019 04:27:01 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1726410AbfGXE0u (ORCPT ); Wed, 24 Jul 2019 00:26:50 -0400 Received: from mail-pg1-f194.google.com ([209.85.215.194]:35428 "EHLO mail-pg1-f194.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1726351AbfGXEZa (ORCPT ); Wed, 24 Jul 2019 00:25:30 -0400 Received: by mail-pg1-f194.google.com with SMTP id s1so14201142pgr.2; Tue, 23 Jul 2019 21:25:30 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=sE4MQj+XhMD6UPTZi8xJ0bNX4cYnp7r/5/gkEQk0tas=; b=N74tfFByAZggYZcsUYTvgYBqmp8bGoZSPNbPePQDgI9xHOKtDI+JBR5Z/IiOFQ1dt4 oWVFdgGyfImD0LwBldj16IkrMRl7/f0OT/Q4l6ls05QNNVgjJIUWu+aeK2CLI7WrmOO1 TCwZxwbVGwL+85kP8X47MDnZlnY5CwsAaw+XO6UEjqntQs8fO4Oc87gAiS/1YP6rdwM3 9NVpw94/A/DYK3lq7v7X4QjwLe8vgVS0TAdSk8ZgDg7r0yVNM7SKDo6TuP97iAlISt79 gTDmZw4nL2v0p7bxSBy6D4Zi0oq/Mz2I8pSvkTef9m0MiEMvo0WtG93wFrP09qcMFi9e QIZA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=sE4MQj+XhMD6UPTZi8xJ0bNX4cYnp7r/5/gkEQk0tas=; b=iqYADaZtIG7BUtRGDNhhm0cJujxU1dDY8swrKM2zvEA+pV9PZUyC5R9+W9Zc7/SvzK KsJTuu1xjy4PKSIbhtNPPLRu7O5fzLyQsHcaAeUM1hudtJwVplcis7Ror9LYadTQuBog 3BWfZgNLA+XhFgSyo+d35y6qoHKbfU/7QUPY0FEDu+rZ6svYQwExjfHa1Va64Bu0Eutx /3FQqE0bSo90WH7Z5mljWp6sLeBrZO8cQNMsWpSPKF3rDyZfFCVkkbZEv3aisrSNNkZL bjI9+aWI7EKqvXzKhGjv/OH6iltPHQHdPD9CfaFbV2+RfrLdPbIAWFyZ+HSCeXuRQFaR RkPw== X-Gm-Message-State: 
Subject: [PATCH 06/12] fs/nfs: convert put_page() to put_user_page*()
Date: Tue, 23 Jul 2019 21:25:12 -0700
Message-Id: <20190724042518.14363-7-jhubbard@nvidia.com>
In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com>

From: Jérôme Glisse

For pages that were retained via get_user_pages*(), release those pages via
the new put_user_page*() routines, instead of via put_page() or
release_pages(). This is part of a tree-wide conversion, as described in
commit fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").
Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-nfs@vger.kernel.org
Cc: Jan Kara
Cc: Dan Williams
Cc: Alexander Viro
Cc: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Jens Axboe
Cc: Ming Lei
Cc: Dave Chinner
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
Cc: Boaz Harrosh
Cc: Trond Myklebust
Cc: Anna Schumaker
---
 fs/nfs/direct.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
index 0cb442406168..35f30fe2900f 100644
--- a/fs/nfs/direct.c
+++ b/fs/nfs/direct.c
@@ -512,7 +512,10 @@ static ssize_t nfs_direct_read_schedule_iovec(struct nfs_direct_req *dreq,
 			pos += req_len;
 			dreq->bytes_left -= req_len;
 		}
-		nfs_direct_release_pages(pagevec, npages);
+		if (iov_iter_get_pages_use_gup(iter))
+			put_user_pages(pagevec, npages);
+		else
+			nfs_direct_release_pages(pagevec, npages);
 		kvfree(pagevec);
 		if (result < 0)
 			break;
@@ -935,7 +938,10 @@ static ssize_t nfs_direct_write_schedule_iovec(struct nfs_direct_req *dreq,
 			pos += req_len;
 			dreq->bytes_left -= req_len;
 		}
-		nfs_direct_release_pages(pagevec, npages);
+		if (iov_iter_get_pages_use_gup(iter))
+			put_user_pages(pagevec, npages);
+		else
+			nfs_direct_release_pages(pagevec, npages);
 		kvfree(pagevec);
 		if (result < 0)
 			break;
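The underlying rule these conversions enforce is that pages pinned with
get_user_pages*() must be released with put_user_page*(), never put_page().
A minimal sketch of that pairing, on the Linux 5.3 API (hypothetical
caller, not code from this patch; my_pin_and_use() is an invented name):

	/*
	 * Sketch only: the acquire/release pairing. FOLL_WRITE requests a
	 * writable pin, so the pages are dirtied on release.
	 */
	static int my_pin_and_use(unsigned long uaddr, int npages,
				  struct page **pages)
	{
		int got;

		got = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
		if (got <= 0)
			return got ? got : -EFAULT;

		/* ... DMA or direct I/O into the pinned pages ... */

		/* Pinned via GUP, so release via put_user_page*(): */
		put_user_pages_dirty_lock(pages, got, true);
		return 0;
	}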
From patchwork Wed Jul 24 04:25:13 2019
From: John Hubbard <jhubbard@nvidia.com>
Subject: [PATCH 07/12] vhost-scsi: convert put_page() to put_user_page*()
Date: Tue, 23 Jul 2019 21:25:13 -0700
Message-Id: <20190724042518.14363-8-jhubbard@nvidia.com>
In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com>

From: Jérôme Glisse

For pages that were retained via get_user_pages*(), release those pages via
the new put_user_page*() routines, instead of via put_page(). This is part
of a tree-wide conversion, as described in commit fc1d8e7cca2d ("mm:
introduce put_user_page*(), placeholder versions").

Changes from Jérôme's original patch:

* Changed a WARN_ON to a BUG_ON.

Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: virtualization@lists.linux-foundation.org
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Jan Kara
Cc: Dan Williams
Cc: Alexander Viro
Cc: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Jens Axboe
Cc: Ming Lei
Cc: Dave Chinner
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
Cc: Boaz Harrosh
Cc: Miklos Szeredi
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Cc: Paolo Bonzini
Cc: Stefan Hajnoczi
Acked-by: Michael S. Tsirkin
---
 drivers/vhost/scsi.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
index a9caf1bc3c3e..282565ab5e3f 100644
--- a/drivers/vhost/scsi.c
+++ b/drivers/vhost/scsi.c
@@ -329,11 +329,11 @@ static void vhost_scsi_release_cmd(struct se_cmd *se_cmd)
 
 	if (tv_cmd->tvc_sgl_count) {
 		for (i = 0; i < tv_cmd->tvc_sgl_count; i++)
-			put_page(sg_page(&tv_cmd->tvc_sgl[i]));
+			put_user_page(sg_page(&tv_cmd->tvc_sgl[i]));
 	}
 	if (tv_cmd->tvc_prot_sgl_count) {
 		for (i = 0; i < tv_cmd->tvc_prot_sgl_count; i++)
-			put_page(sg_page(&tv_cmd->tvc_prot_sgl[i]));
+			put_user_page(sg_page(&tv_cmd->tvc_prot_sgl[i]));
 	}
 
 	vhost_scsi_put_inflight(tv_cmd->inflight);
@@ -630,6 +630,13 @@ vhost_scsi_map_to_sgl(struct vhost_scsi_cmd *cmd,
 	size_t offset;
 	unsigned int npages = 0;
 
+	/*
+	 * In all cases here we should have an iovec that uses GUP. If that
+	 * is not the case, then we will wrongly call put_user_page() and
+	 * the page refcount will go wrong (this happens in
+	 * vhost_scsi_release_cmd()).
+	 */
+	WARN_ON(!iov_iter_get_pages_use_gup(iter));
+
 	bytes = iov_iter_get_pages(iter, pages, LONG_MAX,
				   VHOST_SCSI_PREALLOC_UPAGES, &offset);
 	/* No pages were pinned */
@@ -681,7 +688,7 @@ vhost_scsi_iov_to_sgl(struct vhost_scsi_cmd *cmd, bool write,
 	while (p < sg) {
 		struct page *page = sg_page(p++);
 		if (page)
-			put_page(page);
+			put_user_page(page);
 	}
 	return ret;
 }
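Because vhost_scsi_release_cmd() now calls put_user_page() unconditionally,
the WARN_ON added above is the only guard against an iterator whose pages
did not come from GUP. A sketch of the mapping-side check, with a
hypothetical name and the surrounding vhost machinery stripped away:

/* Illustrative only: pin user pages, warning if they would not be
 * GUP-backed, since the release path will put_user_page() them all. */
static ssize_t map_user_pages(struct iov_iter *iter, struct page **pages,
			      unsigned int maxpages)
{
	size_t offset;
	ssize_t bytes;

	WARN_ON(!iov_iter_get_pages_use_gup(iter));

	bytes = iov_iter_get_pages(iter, pages, LONG_MAX, maxpages, &offset);
	return bytes;	/* <= 0 means no pages were pinned */
}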
From patchwork Wed Jul 24 04:25:14 2019
X-Patchwork-Id: 11055797
From: john.hubbard@gmail.com
Subject: [PATCH 08/12] fs/cifs: convert put_page() to put_user_page*()
Date: Tue, 23 Jul 2019 21:25:14 -0700
Message-Id: <20190724042518.14363-9-jhubbard@nvidia.com>
In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com>

From: Jérôme Glisse

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").
Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: linux-cifs@vger.kernel.org
Cc: Jan Kara
Cc: Dan Williams
Cc: Alexander Viro
Cc: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Jens Axboe
Cc: Ming Lei
Cc: Dave Chinner
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
Cc: Boaz Harrosh
Cc: Steve French
---
 fs/cifs/cifsglob.h |  3 +++
 fs/cifs/file.c     | 22 +++++++++++++++++-----
 fs/cifs/misc.c     | 19 +++++++++++++++----
 3 files changed, 35 insertions(+), 9 deletions(-)

diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
index fe610e7e3670..e95cb82bfa50 100644
--- a/fs/cifs/cifsglob.h
+++ b/fs/cifs/cifsglob.h
@@ -1283,6 +1283,7 @@ struct cifs_aio_ctx {
 	 * If yes, iter is a copy of the user passed iov_iter
 	 */
 	bool			direct_io;
+	bool			from_gup;
 };
 
 struct cifs_readdata;
@@ -1317,6 +1318,7 @@ struct cifs_readdata {
 	struct cifs_credits		credits;
 	unsigned int			nr_pages;
 	struct page			**pages;
+	bool				from_gup;
 };
 
 struct cifs_writedata;
@@ -1343,6 +1345,7 @@ struct cifs_writedata {
 	struct cifs_credits		credits;
 	unsigned int			nr_pages;
 	struct page			**pages;
+	bool				from_gup;
 };
 
 /*
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index 97090693d182..84fa7e0a578f 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2571,8 +2571,13 @@ cifs_uncached_writedata_release(struct kref *refcount)
 				struct cifs_writedata, refcount);
 
 	kref_put(&wdata->ctx->refcount, cifs_aio_ctx_release);
-	for (i = 0; i < wdata->nr_pages; i++)
-		put_page(wdata->pages[i]);
+	if (wdata->from_gup) {
+		for (i = 0; i < wdata->nr_pages; i++)
+			put_user_page(wdata->pages[i]);
+	} else {
+		for (i = 0; i < wdata->nr_pages; i++)
+			put_page(wdata->pages[i]);
+	}
 	cifs_writedata_release(refcount);
 }
 
@@ -2781,7 +2786,7 @@ cifs_write_from_iter(loff_t offset, size_t len, struct iov_iter *from,
 			break;
 		}
 
-
+		wdata->from_gup = iov_iter_get_pages_use_gup(from);
 		wdata->page_offset = start;
 		wdata->tailsz = nr_pages > 1 ?
@@ -2797,6 +2802,7 @@ cifs_write_from_iter(loff_t offset, size_t len, struct iov_iter *from,
 				add_credits_and_wake_if(server, credits, 0);
 				break;
 			}
+			wdata->from_gup = false;
 
 			rc = cifs_write_allocate_pages(wdata->pages, nr_pages);
 			if (rc) {
@@ -3238,8 +3244,12 @@ cifs_uncached_readdata_release(struct kref *refcount)
 	unsigned int i;
 
 	kref_put(&rdata->ctx->refcount, cifs_aio_ctx_release);
-	for (i = 0; i < rdata->nr_pages; i++) {
-		put_page(rdata->pages[i]);
+	if (rdata->from_gup) {
+		for (i = 0; i < rdata->nr_pages; i++)
+			put_user_page(rdata->pages[i]);
+	} else {
+		for (i = 0; i < rdata->nr_pages; i++)
+			put_page(rdata->pages[i]);
 	}
 	cifs_readdata_release(refcount);
 }
@@ -3502,6 +3512,7 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
 				break;
 			}
 
+			rdata->from_gup = iov_iter_get_pages_use_gup(&direct_iov);
 			npages = (cur_len + start + PAGE_SIZE-1) / PAGE_SIZE;
 			rdata->page_offset = start;
 			rdata->tailsz = npages > 1 ?
@@ -3519,6 +3530,7 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
 				rc = -ENOMEM;
 				break;
 			}
+			rdata->from_gup = false;
 
 			rc = cifs_read_allocate_pages(rdata, npages);
 			if (rc) {
diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
index f383877a6511..5a04c34fea05 100644
--- a/fs/cifs/misc.c
+++ b/fs/cifs/misc.c
@@ -822,10 +822,18 @@ cifs_aio_ctx_release(struct kref *refcount)
 	if (ctx->bv) {
 		unsigned i;
 
-		for (i = 0; i < ctx->npages; i++) {
-			if (ctx->should_dirty)
-				set_page_dirty(ctx->bv[i].bv_page);
-			put_page(ctx->bv[i].bv_page);
+		if (ctx->from_gup) {
+			for (i = 0; i < ctx->npages; i++) {
+				if (ctx->should_dirty)
+					set_page_dirty(ctx->bv[i].bv_page);
+				put_user_page(ctx->bv[i].bv_page);
+			}
+		} else {
+			for (i = 0; i < ctx->npages; i++) {
+				if (ctx->should_dirty)
+					set_page_dirty(ctx->bv[i].bv_page);
+				put_page(ctx->bv[i].bv_page);
+			}
 		}
 		kvfree(ctx->bv);
 	}
@@ -881,6 +889,9 @@ setup_aio_ctx_iter(struct cifs_aio_ctx *ctx, struct iov_iter *iter, int rw)
 
 	saved_len = count;
 
+	/* This is only used by cifs_aio_ctx_release() */
+	ctx->from_gup = iov_iter_get_pages_use_gup(iter);
+
 	while (count && npages < max_pages) {
 		rc = iov_iter_get_pages(iter, pages, count, max_pages, &start);
 		if (rc < 0) {
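cifs cannot decide at release time by looking at an iov_iter, because
cifs_uncached_writedata_release() and friends run long after the iterator
is gone. Hence the pattern above: sample iov_iter_get_pages_use_gup() once,
when the request is built, and store the answer next to the page array. A
condensed sketch (the struct and helper are illustrative, not the real cifs
definitions):

/* Illustrative only: record page origin at setup, consult it at release. */
struct pinned_pagevec {
	unsigned int	nr_pages;
	struct page	**pages;
	bool		from_gup; /* iov_iter_get_pages_use_gup() result */
};

static void pinned_pagevec_release(struct pinned_pagevec *pv)
{
	unsigned int i;

	if (pv->from_gup) {
		for (i = 0; i < pv->nr_pages; i++)
			put_user_page(pv->pages[i]);
	} else {
		for (i = 0; i < pv->nr_pages; i++)
			put_page(pv->pages[i]);
	}
}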
From patchwork Wed Jul 24 04:25:15 2019
X-Patchwork-Id: 11055793
From: john.hubbard@gmail.com
Subject: [PATCH 09/12] fs/fuse: convert put_page() to put_user_page*()
Date: Tue, 23 Jul 2019 21:25:15 -0700
Message-Id: <20190724042518.14363-10-jhubbard@nvidia.com>
In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com>

From: Jérôme Glisse

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

Changes from Jérôme's original patch:

* Use the enhanced put_user_pages_dirty_lock().
Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: Jan Kara
Cc: Dan Williams
Cc: Alexander Viro
Cc: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Jens Axboe
Cc: Ming Lei
Cc: Dave Chinner
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
Cc: Boaz Harrosh
Cc: Miklos Szeredi
---
 fs/fuse/dev.c  | 22 +++++++++++++++++----
 fs/fuse/file.c | 53 +++++++++++++++++++++++++++++++++++++-------------
 2 files changed, 57 insertions(+), 18 deletions(-)

diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
index ea8237513dfa..8ef65c9cd3f6 100644
--- a/fs/fuse/dev.c
+++ b/fs/fuse/dev.c
@@ -780,6 +780,7 @@ struct fuse_copy_state {
 	unsigned len;
 	unsigned offset;
 	unsigned move_pages:1;
+	bool from_gup;
 };
 
 static void fuse_copy_init(struct fuse_copy_state *cs, int write,
@@ -800,13 +801,22 @@ static void fuse_copy_finish(struct fuse_copy_state *cs)
 			buf->len = PAGE_SIZE - cs->len;
 		cs->currbuf = NULL;
 	} else if (cs->pg) {
-		if (cs->write) {
-			flush_dcache_page(cs->pg);
-			set_page_dirty_lock(cs->pg);
+		if (cs->from_gup) {
+			if (cs->write) {
+				flush_dcache_page(cs->pg);
+				put_user_pages_dirty_lock(&cs->pg, 1, true);
+			} else
+				put_user_page(cs->pg);
+		} else {
+			if (cs->write) {
+				flush_dcache_page(cs->pg);
+				set_page_dirty_lock(cs->pg);
+			}
+			put_page(cs->pg);
 		}
-		put_page(cs->pg);
 	}
 	cs->pg = NULL;
+	cs->from_gup = false;
 }
 
 /*
@@ -834,6 +844,7 @@ static int fuse_copy_fill(struct fuse_copy_state *cs)
 			BUG_ON(!cs->nr_segs);
 			cs->currbuf = buf;
 			cs->pg = buf->page;
+			cs->from_gup = false;
 			cs->offset = buf->offset;
 			cs->len = buf->len;
 			cs->pipebufs++;
@@ -851,6 +862,7 @@ static int fuse_copy_fill(struct fuse_copy_state *cs)
 			buf->len = 0;
 
 			cs->currbuf = buf;
+			cs->from_gup = false;
 			cs->pg = page;
 			cs->offset = 0;
 			cs->len = PAGE_SIZE;
@@ -866,6 +878,7 @@ static int fuse_copy_fill(struct fuse_copy_state *cs)
 		cs->len = err;
 		cs->offset = off;
 		cs->pg = page;
+		cs->from_gup = iov_iter_get_pages_use_gup(cs->iter);
 		iov_iter_advance(cs->iter, err);
 	}
 
@@ -1000,6 +1013,7 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep)
 	unlock_page(newpage);
 out_fallback:
 	cs->pg = buf->page;
+	cs->from_gup = false;
 	cs->offset = buf->offset;
 
 	err = lock_request(cs->req);
diff --git a/fs/fuse/file.c b/fs/fuse/file.c
index 5ae2828beb00..c34c22ac5b22 100644
--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -543,12 +543,20 @@ void fuse_read_fill(struct fuse_req *req, struct file *file, loff_t pos,
 	req->out.args[0].size = count;
 }
 
-static void fuse_release_user_pages(struct fuse_req *req, bool should_dirty)
+static void fuse_release_user_pages(struct fuse_req *req, bool should_dirty,
+				    bool from_gup)
 {
 	unsigned i;
 
+	if (from_gup) {
+		put_user_pages_dirty_lock(req->pages, req->num_pages,
+					  should_dirty);
+		return;
+	}
+
 	for (i = 0; i < req->num_pages; i++) {
 		struct page *page = req->pages[i];
+
 		if (should_dirty)
 			set_page_dirty_lock(page);
 		put_page(page);
@@ -621,12 +629,13 @@ static void fuse_aio_complete(struct fuse_io_priv *io, int err, ssize_t pos)
 	kref_put(&io->refcnt, fuse_io_release);
 }
 
-static void fuse_aio_complete_req(struct fuse_conn *fc, struct fuse_req *req)
+static void _fuse_aio_complete_req(struct fuse_conn *fc, struct fuse_req *req,
+				   bool from_gup)
 {
 	struct fuse_io_priv *io = req->io;
 	ssize_t pos = -1;
 
-	fuse_release_user_pages(req, io->should_dirty);
+	fuse_release_user_pages(req, io->should_dirty, from_gup);
 
 	if (io->write) {
 		if (req->misc.write.in.size != req->misc.write.out.size)
@@ -641,8 +650,18 @@ static void fuse_aio_complete_req(struct fuse_conn *fc, struct fuse_req *req)
 	fuse_aio_complete(io, req->out.h.error, pos);
 }
 
+static void fuse_aio_from_gup_complete_req(struct fuse_conn *fc,
+					   struct fuse_req *req)
+{
+	_fuse_aio_complete_req(fc, req, true);
+}
+
+static void fuse_aio_complete_req(struct fuse_conn *fc, struct fuse_req *req)
+{
+	_fuse_aio_complete_req(fc, req, false);
+}
+
 static size_t fuse_async_req_send(struct fuse_conn *fc, struct fuse_req *req,
-				  size_t num_bytes, struct fuse_io_priv *io)
+				  size_t num_bytes, struct fuse_io_priv *io,
+				  bool from_gup)
 {
 	spin_lock(&io->lock);
 	kref_get(&io->refcnt);
@@ -651,7 +670,8 @@ static size_t fuse_async_req_send(struct fuse_conn *fc, struct fuse_req *req,
 	spin_unlock(&io->lock);
 
 	req->io = io;
-	req->end = fuse_aio_complete_req;
+	req->end = from_gup ? fuse_aio_from_gup_complete_req :
+			      fuse_aio_complete_req;
 
 	__fuse_get_request(req);
 	fuse_request_send_background(fc, req);
@@ -660,7 +680,8 @@ static size_t fuse_async_req_send(struct fuse_conn *fc, struct fuse_req *req,
 }
 
 static size_t fuse_send_read(struct fuse_req *req, struct fuse_io_priv *io,
-			     loff_t pos, size_t count, fl_owner_t owner)
+			     loff_t pos, size_t count, fl_owner_t owner,
+			     bool from_gup)
 {
 	struct file *file = io->iocb->ki_filp;
 	struct fuse_file *ff = file->private_data;
@@ -675,7 +696,7 @@ static size_t fuse_send_read(struct fuse_req *req, struct fuse_io_priv *io,
 	}
 
 	if (io->async)
-		return fuse_async_req_send(fc, req, count, io);
+		return fuse_async_req_send(fc, req, count, io, from_gup);
 
 	fuse_request_send(fc, req);
 	return req->out.args[0].size;
@@ -755,7 +776,7 @@ static int fuse_do_readpage(struct file *file, struct page *page)
 	req->page_descs[0].length = count;
 	init_sync_kiocb(&iocb, file);
 	io = (struct fuse_io_priv) FUSE_IO_PRIV_SYNC(&iocb);
-	num_read = fuse_send_read(req, &io, pos, count, NULL);
+	num_read = fuse_send_read(req, &io, pos, count, NULL, false);
 	err = req->out.h.error;
 
 	if (!err) {
@@ -976,7 +997,8 @@ static void fuse_write_fill(struct fuse_req *req, struct fuse_file *ff,
 }
 
 static size_t fuse_send_write(struct fuse_req *req, struct fuse_io_priv *io,
-			      loff_t pos, size_t count, fl_owner_t owner)
+			      loff_t pos, size_t count, fl_owner_t owner,
+			      bool from_gup)
 {
 	struct kiocb *iocb = io->iocb;
 	struct file *file = iocb->ki_filp;
@@ -996,7 +1018,7 @@ static size_t fuse_send_write(struct fuse_req *req, struct fuse_io_priv *io,
 	}
 
 	if (io->async)
-		return fuse_async_req_send(fc, req, count, io);
+		return fuse_async_req_send(fc, req, count, io, from_gup);
 
 	fuse_request_send(fc, req);
 	return req->misc.write.out.size;
@@ -1031,7 +1053,7 @@ static size_t fuse_send_write_pages(struct fuse_req *req, struct kiocb *iocb,
 	for (i = 0; i < req->num_pages; i++)
 		fuse_wait_on_page_writeback(inode, req->pages[i]->index);
 
-	res = fuse_send_write(req, &io, pos, count, NULL);
+	res = fuse_send_write(req, &io, pos, count, NULL, false);
 
 	offset = req->page_descs[0].offset;
 	count = res;
@@ -1351,6 +1373,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
 	ssize_t res = 0;
 	struct fuse_req *req;
 	int err = 0;
+	bool from_gup = iov_iter_get_pages_use_gup(iter);
 
 	if (io->async)
 		req = fuse_get_req_for_background(fc, iov_iter_npages(iter,
@@ -1384,13 +1407,15 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
 			inarg = &req->misc.write.in;
 			inarg->write_flags |= FUSE_WRITE_KILL_PRIV;
 		}
-		nres = fuse_send_write(req, io, pos, nbytes, owner);
+		nres = fuse_send_write(req, io, pos, nbytes, owner,
+				       from_gup);
 	} else {
-		nres = fuse_send_read(req, io, pos, nbytes, owner);
+		nres = fuse_send_read(req, io, pos, nbytes, owner,
+				      from_gup);
 	}
 
 	if (!io->async)
-		fuse_release_user_pages(req, io->should_dirty);
+		fuse_release_user_pages(req, io->should_dirty, from_gup);
 	if (req->out.h.error) {
 		err = req->out.h.error;
 		break;
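The reworked fuse_release_user_pages() above is this series' clearest user
of the enhanced put_user_pages_dirty_lock() named in the commit message: a
single call replaces the whole set_page_dirty_lock() + put_page() loop on
the GUP path. Restated as a standalone sketch (illustrative name, kernel
context assumed):

/* Sketch of the dual release path in fuse_release_user_pages(). */
static void release_user_pages(struct page **pages, unsigned int npages,
			       bool should_dirty, bool from_gup)
{
	unsigned int i;

	if (from_gup) {
		/* dirties (if requested) and unpins in one call */
		put_user_pages_dirty_lock(pages, npages, should_dirty);
		return;
	}

	for (i = 0; i < npages; i++) {
		if (should_dirty)
			set_page_dirty_lock(pages[i]);
		put_page(pages[i]);
	}
}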
From patchwork Wed Jul 24 04:25:16 2019
X-Patchwork-Id: 11055767
From: john.hubbard@gmail.com
Subject: [PATCH 10/12] fs/ceph: convert put_page() to put_user_page*()
Date: Tue, 23 Jul 2019 21:25:16 -0700
Message-Id: <20190724042518.14363-11-jhubbard@nvidia.com>
In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com>

From: Jérôme Glisse

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

Changes from Jérôme's original patch:

* Use the enhanced put_user_pages_dirty_lock().
Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: ceph-devel@vger.kernel.org
Cc: Jan Kara
Cc: Dan Williams
Cc: Alexander Viro
Cc: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Jens Axboe
Cc: Ming Lei
Cc: Dave Chinner
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
Cc: Boaz Harrosh
Cc: "Yan, Zheng"
Cc: Sage Weil
Cc: Ilya Dryomov
---
 fs/ceph/file.c | 62 ++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 48 insertions(+), 14 deletions(-)

diff --git a/fs/ceph/file.c b/fs/ceph/file.c
index 685a03cc4b77..c628a1f96978 100644
--- a/fs/ceph/file.c
+++ b/fs/ceph/file.c
@@ -158,18 +158,26 @@ static ssize_t iter_get_bvecs_alloc(struct iov_iter *iter, size_t maxsize,
 	return bytes;
 }
 
-static void put_bvecs(struct bio_vec *bvecs, int num_bvecs, bool should_dirty)
+static void put_bvecs(struct bio_vec *bv, int num_bvecs, bool should_dirty,
+		      bool from_gup)
 {
 	int i;
+
 	for (i = 0; i < num_bvecs; i++) {
-		if (bvecs[i].bv_page) {
+		if (!bv[i].bv_page)
+			continue;
+
+		if (from_gup) {
+			put_user_pages_dirty_lock(&bv[i].bv_page, 1,
+						  should_dirty);
+		} else {
 			if (should_dirty)
-				set_page_dirty_lock(bvecs[i].bv_page);
-			put_page(bvecs[i].bv_page);
+				set_page_dirty_lock(bv[i].bv_page);
+			put_page(bv[i].bv_page);
 		}
 	}
-	kvfree(bvecs);
+	kvfree(bv);
 }
 
 /*
@@ -730,6 +738,7 @@ struct ceph_aio_work {
 };
 
 static void ceph_aio_retry_work(struct work_struct *work);
+static void ceph_aio_from_gup_retry_work(struct work_struct *work);
 
 static void ceph_aio_complete(struct inode *inode,
 			      struct ceph_aio_request *aio_req)
@@ -774,7 +783,7 @@ static void ceph_aio_complete(struct inode *inode,
 	kfree(aio_req);
 }
 
-static void ceph_aio_complete_req(struct ceph_osd_request *req)
+static void _ceph_aio_complete_req(struct ceph_osd_request *req, bool from_gup)
 {
 	int rc = req->r_result;
 	struct inode *inode = req->r_inode;
@@ -793,7 +802,9 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 
 		aio_work = kmalloc(sizeof(*aio_work), GFP_NOFS);
 		if (aio_work) {
-			INIT_WORK(&aio_work->work, ceph_aio_retry_work);
+			INIT_WORK(&aio_work->work, from_gup ?
+				  ceph_aio_from_gup_retry_work :
+				  ceph_aio_retry_work);
 			aio_work->req = req;
 			queue_work(ceph_inode_to_client(inode)->inode_wq,
 				   &aio_work->work);
@@ -830,7 +841,7 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 	}
 
 	put_bvecs(osd_data->bvec_pos.bvecs, osd_data->num_bvecs,
-		  aio_req->should_dirty);
+		  aio_req->should_dirty, from_gup);
 	ceph_osdc_put_request(req);
 
 	if (rc < 0)
@@ -840,7 +851,17 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
 	return;
 }
 
-static void ceph_aio_retry_work(struct work_struct *work)
+static void ceph_aio_complete_req(struct ceph_osd_request *req)
+{
+	_ceph_aio_complete_req(req, false);
+}
+
+static void ceph_aio_from_gup_complete_req(struct ceph_osd_request *req)
+{
+	_ceph_aio_complete_req(req, true);
+}
+
+static void _ceph_aio_retry_work(struct work_struct *work, bool from_gup)
 {
 	struct ceph_aio_work *aio_work =
 		container_of(work, struct ceph_aio_work, work);
@@ -891,7 +912,8 @@ static void ceph_aio_retry_work(struct work_struct *work)
 
 	ceph_osdc_put_request(orig_req);
 
-	req->r_callback = ceph_aio_complete_req;
+	req->r_callback = from_gup ? ceph_aio_from_gup_complete_req :
+				     ceph_aio_complete_req;
 	req->r_inode = inode;
 	req->r_priv = aio_req;
 
@@ -899,13 +921,23 @@ static void _ceph_aio_retry_work(struct work_struct *work, bool from_gup)
 out:
 	if (ret < 0) {
 		req->r_result = ret;
-		ceph_aio_complete_req(req);
+		_ceph_aio_complete_req(req, from_gup);
 	}
 
 	ceph_put_snap_context(snapc);
 	kfree(aio_work);
 }
 
+static void ceph_aio_retry_work(struct work_struct *work)
+{
+	_ceph_aio_retry_work(work, false);
+}
+
+static void ceph_aio_from_gup_retry_work(struct work_struct *work)
+{
+	_ceph_aio_retry_work(work, true);
+}
+
 static ssize_t
 ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
 		       struct ceph_snap_context *snapc,
@@ -927,6 +959,7 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
 	loff_t pos = iocb->ki_pos;
 	bool write = iov_iter_rw(iter) == WRITE;
 	bool should_dirty = !write && iter_is_iovec(iter);
+	bool from_gup = iov_iter_get_pages_use_gup(iter);
 
 	if (write && ceph_snap(file_inode(file)) != CEPH_NOSNAP)
 		return -EROFS;
@@ -1023,7 +1056,8 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
 			aio_req->num_reqs++;
 			atomic_inc(&aio_req->pending_reqs);
 
-			req->r_callback = ceph_aio_complete_req;
+			req->r_callback = !from_gup ? ceph_aio_complete_req :
+					  ceph_aio_from_gup_complete_req;
 			req->r_inode = inode;
 			req->r_priv = aio_req;
 			list_add_tail(&req->r_private_item, &aio_req->osd_reqs);
@@ -1054,7 +1088,7 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
 				len = ret;
 		}
 
-		put_bvecs(bvecs, num_pages, should_dirty);
+		put_bvecs(bvecs, num_pages, should_dirty, from_gup);
 		ceph_osdc_put_request(req);
 		if (ret < 0)
 			break;
@@ -1093,7 +1127,7 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
 						  req, false);
 		if (ret < 0) {
 			req->r_result = ret;
-			ceph_aio_complete_req(req);
+			_ceph_aio_complete_req(req, from_gup);
 		}
 	}
 	return -EIOCBQUEUED;
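Both of ceph's async paths take a bare function pointer (req->r_callback
and the retry work item), so from_gup cannot travel as an argument. The
patch's answer, visible in the hunks above, is a pair of thin wrappers
around a bool-taking core function. The shape, reduced to a sketch with
hypothetical names:

/* Illustrative only: the wrapper-pair idiom used for ceph's callbacks. */
static void _complete_req(struct ceph_osd_request *req, bool from_gup);

static void complete_req(struct ceph_osd_request *req)
{
	_complete_req(req, false);	/* pages hold plain references */
}

static void complete_req_from_gup(struct ceph_osd_request *req)
{
	_complete_req(req, true);	/* pages were pinned via GUP */
}

/* callers then select the right callback when queuing the request:
 *	req->r_callback = from_gup ? complete_req_from_gup : complete_req;
 */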
From patchwork Wed Jul 24 04:25:17 2019
X-Patchwork-Id: 11055769
From: john.hubbard@gmail.com
Subject: [PATCH 11/12] 9p/net: convert put_page() to put_user_page*()
Date: Tue, 23 Jul 2019 21:25:17 -0700
Message-Id: <20190724042518.14363-12-jhubbard@nvidia.com>
In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com>

From: Jérôme Glisse

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").
Signed-off-by: Jérôme Glisse
Signed-off-by: John Hubbard
Cc: linux-fsdevel@vger.kernel.org
Cc: linux-block@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: v9fs-developer@lists.sourceforge.net
Cc: Jan Kara
Cc: Dan Williams
Cc: Alexander Viro
Cc: Johannes Thumshirn
Cc: Christoph Hellwig
Cc: Jens Axboe
Cc: Ming Lei
Cc: Dave Chinner
Cc: Jason Gunthorpe
Cc: Matthew Wilcox
Cc: Boaz Harrosh
Cc: Eric Van Hensbergen
Cc: Latchesar Ionkov
Cc: Dominique Martinet
---
 net/9p/trans_common.c | 14 ++++++++++----
 net/9p/trans_common.h |  3 ++-
 net/9p/trans_virtio.c | 18 +++++++++++++-----
 3 files changed, 25 insertions(+), 10 deletions(-)

diff --git a/net/9p/trans_common.c b/net/9p/trans_common.c
index 3dff68f05fb9..e5c359c369a6 100644
--- a/net/9p/trans_common.c
+++ b/net/9p/trans_common.c
@@ -19,12 +19,18 @@
 /**
  * p9_release_pages - Release pages after the transaction.
  */
-void p9_release_pages(struct page **pages, int nr_pages)
+void p9_release_pages(struct page **pages, int nr_pages, bool from_gup)
 {
 	int i;
 
-	for (i = 0; i < nr_pages; i++)
-		if (pages[i])
-			put_page(pages[i]);
+	if (from_gup) {
+		for (i = 0; i < nr_pages; i++)
+			if (pages[i])
+				put_user_page(pages[i]);
+	} else {
+		for (i = 0; i < nr_pages; i++)
+			if (pages[i])
+				put_page(pages[i]);
+	}
 }
 EXPORT_SYMBOL(p9_release_pages);
diff --git a/net/9p/trans_common.h b/net/9p/trans_common.h
index c43babb3f635..dcf025867314 100644
--- a/net/9p/trans_common.h
+++ b/net/9p/trans_common.h
@@ -12,4 +12,5 @@
  *
  */
 
-void p9_release_pages(struct page **, int);
+void p9_release_pages(struct page **pages, int nr_pages, bool from_gup);
+
diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
index a3cd90a74012..3714ca5ecdc2 100644
--- a/net/9p/trans_virtio.c
+++ b/net/9p/trans_virtio.c
@@ -306,11 +306,14 @@ static int p9_get_mapped_pages(struct virtio_chan *chan,
 			       struct iov_iter *data,
 			       int count,
 			       size_t *offs,
-			       int *need_drop)
+			       int *need_drop,
+			       bool *from_gup)
 {
 	int nr_pages;
 	int err;
 
+	*from_gup = false;
+
 	if (!iov_iter_count(data))
 		return 0;
 
@@ -332,6 +335,7 @@ static int p9_get_mapped_pages(struct virtio_chan *chan,
 		*need_drop = 1;
 		nr_pages = DIV_ROUND_UP(n + *offs, PAGE_SIZE);
 		atomic_add(nr_pages, &vp_pinned);
+		*from_gup = iov_iter_get_pages_use_gup(data);
 		return n;
 	} else {
 		/* kernel buffer, no need to pin pages */
@@ -397,13 +401,15 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
 	size_t offs;
 	int need_drop = 0;
 	int kicked = 0;
+	bool in_from_gup, out_from_gup;
 
 	p9_debug(P9_DEBUG_TRANS, "virtio request\n");
 
 	if (uodata) {
 		__le32 sz;
 		int n = p9_get_mapped_pages(chan, &out_pages, uodata,
-					    outlen, &offs, &need_drop);
+					    outlen, &offs, &need_drop,
+					    &out_from_gup);
 		if (n < 0) {
 			err = n;
 			goto err_out;
@@ -422,7 +428,8 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
 		memcpy(&req->tc.sdata[0], &sz, sizeof(sz));
 	} else if (uidata) {
 		int n = p9_get_mapped_pages(chan, &in_pages, uidata,
-					    inlen, &offs, &need_drop);
+					    inlen, &offs, &need_drop,
+					    &in_from_gup);
 		if (n < 0) {
 			err = n;
 			goto err_out;
@@ -504,11 +511,12 @@ p9_virtio_zc_request(struct p9_client *client, struct p9_req_t *req,
 err_out:
 	if (need_drop) {
 		if (in_pages) {
-			p9_release_pages(in_pages, in_nr_pages);
+			p9_release_pages(in_pages, in_nr_pages, in_from_gup);
 			atomic_sub(in_nr_pages, &vp_pinned);
 		}
 		if (out_pages) {
-			p9_release_pages(out_pages, out_nr_pages);
+			p9_release_pages(out_pages, out_nr_pages,
+					 out_from_gup);
 			atomic_sub(out_nr_pages, &vp_pinned);
 		}
 		/* wakeup anybody waiting for slots to pin pages */
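Unlike cifs and fuse, 9p learns the pages' origin inside
p9_get_mapped_pages(), which may be handed either a user-backed or a
kernel-backed iterator, so the result travels back to the caller through a
new out-parameter rather than a struct field. A reduced sketch of that flow
(illustrative name; the channel bookkeeping and error handling from the
real function are elided):

/* Illustrative only: the mapping helper reports page origin via *from_gup. */
static ssize_t map_pages_for_zc(struct iov_iter *data, struct page ***pages,
				size_t count, size_t *offs, bool *from_gup)
{
	ssize_t n;

	*from_gup = false;
	if (!iov_iter_count(data))
		return 0;

	n = iov_iter_get_pages_alloc(data, pages, count, offs);
	if (n > 0)
		*from_gup = iov_iter_get_pages_use_gup(data);
	return n;
}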
From patchwork Wed Jul 24 04:25:18 2019
X-Patchwork-Id: 11055739
From: john.hubbard@gmail.com
Miller" , Dominique Martinet , Eric Van Hensbergen , Jason Gunthorpe , Jason Wang , Jens Axboe , Latchesar Ionkov , "Michael S . Tsirkin" , Miklos Szeredi , Trond Myklebust , Christoph Hellwig , Matthew Wilcox , linux-mm@kvack.org, LKML , ceph-devel@vger.kernel.org, kvm@vger.kernel.org, linux-block@vger.kernel.org, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-nfs@vger.kernel.org, linux-rdma@vger.kernel.org, netdev@vger.kernel.org, samba-technical@lists.samba.org, v9fs-developer@lists.sourceforge.net, virtualization@lists.linux-foundation.org, John Hubbard Subject: [PATCH 12/12] fs/ceph: fix a build warning: returning a value from void function Date: Tue, 23 Jul 2019 21:25:18 -0700 Message-Id: <20190724042518.14363-13-jhubbard@nvidia.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190724042518.14363-1-jhubbard@nvidia.com> References: <20190724042518.14363-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public Sender: linux-cifs-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-cifs@vger.kernel.org X-Virus-Scanned: ClamAV using ClamSMTP From: John Hubbard Trivial build warning fix: don't return a value from a function whose type is "void". Signed-off-by: John Hubbard --- fs/ceph/debugfs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/fs/ceph/debugfs.c b/fs/ceph/debugfs.c index 2eb88ed22993..fa14c8e8761d 100644 --- a/fs/ceph/debugfs.c +++ b/fs/ceph/debugfs.c @@ -294,7 +294,7 @@ void ceph_fs_debugfs_init(struct ceph_fs_client *fsc) void ceph_fs_debugfs_init(struct ceph_fs_client *fsc) { - return 0; + return; } void ceph_fs_debugfs_cleanup(struct ceph_fs_client *fsc)