From patchwork Fri Jan 31 06:13:35 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11359207
Date: Thu, 30 Jan 2020 22:13:35 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, alex.williamson@redhat.com,
 aneesh.kumar@linux.ibm.com, axboe@kernel.dk, bjorn.topel@intel.com,
 corbet@lwn.net, dan.j.williams@intel.com, daniel.vetter@ffwll.ch,
 hch@lst.de, hverkuil-cisco@xs4all.nl, ira.weiny@intel.com, jack@suse.cz,
 jgg@mellanox.com, jgg@ziepe.ca, jglisse@redhat.com, jhubbard@nvidia.com,
 kirill@shutemov.name, leonro@mellanox.com, linux-mm@kvack.org,
 mchehab@kernel.org, mm-commits@vger.kernel.org, rppt@linux.ibm.com,
 torvalds@linux-foundation.org
Subject: [patch 045/118] mm, tree-wide: rename put_user_page*() to unpin_user_page*()
Message-ID: <20200131061335.RvG9Pys7F%akpm@linux-foundation.org>
In-Reply-To: <20200130221021.5f0211c56346d5485af07923@linux-foundation.org>

From: John Hubbard
Subject: mm, tree-wide: rename put_user_page*() to unpin_user_page*()

Rename put_user_page*() to unpin_user_page*() in order to provide a
clearer, more symmetric API for pinning and unpinning DMA pages.  This
way, pin_user_pages*() calls match up with unpin_user_pages*() calls,
and the API is much closer to being self-explanatory.

Link: http://lkml.kernel.org/r/20200107224558.2362728-23-jhubbard@nvidia.com
Signed-off-by: John Hubbard
Reviewed-by: Jan Kara
Cc: Alex Williamson
Cc: Aneesh Kumar K.V
Cc: Björn Töpel
Cc: Christoph Hellwig
Cc: Daniel Vetter
Cc: Dan Williams
Cc: Hans Verkuil
Cc: Ira Weiny
Cc: Jason Gunthorpe
Cc: Jens Axboe
Cc: Jerome Glisse
Cc: Jonathan Corbet
Cc: Kirill A. Shutemov
Cc: Leon Romanovsky
Cc: Mauro Carvalho Chehab
Cc: Mike Rapoport
Signed-off-by: Andrew Morton
---

 Documentation/core-api/pin_user_pages.rst   |    2 -
 arch/powerpc/mm/book3s64/iommu_api.c        |    4 +-
 drivers/gpu/drm/via/via_dmablit.c           |    4 +-
 drivers/infiniband/core/umem.c              |    2 -
 drivers/infiniband/hw/hfi1/user_pages.c     |    2 -
 drivers/infiniband/hw/mthca/mthca_memfree.c |    6 +--
 drivers/infiniband/hw/qib/qib_user_pages.c  |    2 -
 drivers/infiniband/hw/qib/qib_user_sdma.c   |    6 +--
 drivers/infiniband/hw/usnic/usnic_uiom.c    |    2 -
 drivers/infiniband/sw/siw/siw_mem.c         |    2 -
 drivers/media/v4l2-core/videobuf-dma-sg.c   |    4 +-
 drivers/platform/goldfish/goldfish_pipe.c   |    4 +-
 drivers/vfio/vfio_iommu_type1.c             |    2 -
 fs/io_uring.c                               |    4 +-
 include/linux/mm.h                          |   26 +++++++-------
 mm/gup.c                                    |   32 +++++++++---------
 mm/process_vm_access.c                      |    4 +-
 net/xdp/xdp_umem.c                          |    2 -
 18 files changed, 55 insertions(+), 55 deletions(-)
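To make the rename concrete from a caller's point of view before diving into
the hunks: the sketch below is illustrative only and not part of this patch.
The helper name demo_dma_rw is invented and the DMA setup is elided, but
pin_user_pages(), unpin_user_pages_dirty_lock(), kvmalloc_array() and kvfree()
are the real APIs as of this series (pin_user_pages() must be called with
mmap_sem held for read):

#include <linux/mm.h>
#include <linux/sched.h>

/* Hypothetical caller, not from this patch: pin, do DMA, unpin. */
static int demo_dma_rw(unsigned long uaddr, unsigned long nr_pages, bool write)
{
        struct page **pages;
        long pinned;

        pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /* FOLL_PIN is set internally by pin_user_pages*(); never pass it. */
        down_read(&current->mm->mmap_sem);
        pinned = pin_user_pages(uaddr, nr_pages, write ? FOLL_WRITE : 0,
                                pages, NULL);
        up_read(&current->mm->mmap_sem);
        if (pinned < 0) {
                kvfree(pages);
                return pinned;
        }

        /* ... map "pages" for DMA and run the transfer here ... */

        /* Release via the unpin side; dirty only if the device may have written. */
        unpin_user_pages_dirty_lock(pages, pinned, write);
        kvfree(pages);
        return 0;
}

Every hunk below is this same substitution: the acquire side is untouched,
only the release calls are renamed.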
--- a/arch/powerpc/mm/book3s64/iommu_api.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/arch/powerpc/mm/book3s64/iommu_api.c
@@ -168,7 +168,7 @@ good_exit:

 free_exit:
        /* free the references taken */
-       put_user_pages(mem->hpages, pinned);
+       unpin_user_pages(mem->hpages, pinned);

        vfree(mem->hpas);
        kfree(mem);
@@ -214,7 +214,7 @@ static void mm_iommu_unpin(struct mm_iom
                if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
                        SetPageDirty(page);

-               put_user_page(page);
+               unpin_user_page(page);

                mem->hpas[i] = 0;
        }
--- a/Documentation/core-api/pin_user_pages.rst~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/Documentation/core-api/pin_user_pages.rst
@@ -219,7 +219,7 @@ since the system was booted, via two new
 /proc/vmstat/nr_foll_pin_requested

 Those are both going to show zero, unless CONFIG_DEBUG_VM is set. This is
-because there is a noticeable performance drop in put_user_page(), when they
+because there is a noticeable performance drop in unpin_user_page(), when they
 are activated.

 References
--- a/drivers/gpu/drm/via/via_dmablit.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/gpu/drm/via/via_dmablit.c
@@ -188,8 +188,8 @@ via_free_sg_info(struct pci_dev *pdev, d
                kfree(vsg->desc_pages);
                /* fall through */
        case dr_via_pages_locked:
-               put_user_pages_dirty_lock(vsg->pages, vsg->num_pages,
-                                         (vsg->direction == DMA_FROM_DEVICE));
+               unpin_user_pages_dirty_lock(vsg->pages, vsg->num_pages,
+                                           (vsg->direction == DMA_FROM_DEVICE));
                /* fall through */
        case dr_via_pages_alloc:
                vfree(vsg->pages);
--- a/drivers/infiniband/core/umem.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/infiniband/core/umem.c
@@ -54,7 +54,7 @@ static void __ib_umem_release(struct ib_

        for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
                page = sg_page_iter_page(&sg_iter);
-               put_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
+               unpin_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
        }

        sg_free_table(&umem->sg_head);
--- a/drivers/infiniband/hw/hfi1/user_pages.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/infiniband/hw/hfi1/user_pages.c
@@ -118,7 +118,7 @@ int hfi1_acquire_user_pages(struct mm_st
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
                             size_t npages, bool dirty)
 {
-       put_user_pages_dirty_lock(p, npages, dirty);
+       unpin_user_pages_dirty_lock(p, npages, dirty);

        if (mm) { /* during close after signal, mm can be NULL */
                atomic64_sub(npages, &mm->pinned_vm);
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -482,7 +482,7 @@ int mthca_map_user_db(struct mthca_dev *
        ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
        if (ret < 0) {
-               put_user_page(pages[0]);
+               unpin_user_page(pages[0]);
                goto out;
        }
@@ -490,7 +490,7 @@ int mthca_map_user_db(struct mthca_dev *
                         mthca_uarc_virt(dev, uar, i));
        if (ret) {
                pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-               put_user_page(sg_page(&db_tab->page[i].mem));
+               unpin_user_page(sg_page(&db_tab->page[i].mem));
                goto out;
        }
@@ -556,7 +556,7 @@ void mthca_cleanup_user_db_tab(struct mt
                if (db_tab->page[i].uvirt) {
                        mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1);
                        pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-                       put_user_page(sg_page(&db_tab->page[i].mem));
+                       unpin_user_page(sg_page(&db_tab->page[i].mem));
                }
        }
--- a/drivers/infiniband/hw/qib/qib_user_pages.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -40,7 +40,7 @@
 static void __qib_release_user_pages(struct page **p, size_t num_pages,
                                     int dirty)
 {
-       put_user_pages_dirty_lock(p, num_pages, dirty);
+       unpin_user_pages_dirty_lock(p, num_pages, dirty);
 }

 /**
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -317,7 +317,7 @@ static int qib_user_sdma_page_to_frags(c
                         * the caller can ignore this page.
                         */
                        if (put) {
-                               put_user_page(page);
+                               unpin_user_page(page);
                        } else {
                                /* coalesce case */
                                kunmap(page);
@@ -631,7 +631,7 @@ static void qib_user_sdma_free_pkt_frag(
                                kunmap(pkt->addr[i].page);

                        if (pkt->addr[i].put_page)
-                               put_user_page(pkt->addr[i].page);
+                               unpin_user_page(pkt->addr[i].page);
                        else
                                __free_page(pkt->addr[i].page);
                } else if (pkt->addr[i].kvaddr) {
@@ -706,7 +706,7 @@ static int qib_user_sdma_pin_pages(const
        /* if error, return all pages not managed by pkt */
 free_pages:
        while (i < j)
-               put_user_page(pages[i++]);
+               unpin_user_page(pages[i++]);

 done:
        return ret;
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -75,7 +75,7 @@ static void usnic_uiom_put_pages(struct
                for_each_sg(chunk->page_list, sg, chunk->nents, i) {
                        page = sg_page(sg);
                        pa = sg_phys(sg);
-                       put_user_pages_dirty_lock(&page, 1, dirty);
+                       unpin_user_pages_dirty_lock(&page, 1, dirty);
                        usnic_dbg("pa: %pa\n", &pa);
                }
                kfree(chunk);
--- a/drivers/infiniband/sw/siw/siw_mem.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/infiniband/sw/siw/siw_mem.c
@@ -63,7 +63,7 @@ struct siw_mem *siw_mem_id2obj(struct si
 static void siw_free_plist(struct siw_page_chunk *chunk, int num_pages,
                           bool dirty)
 {
-       put_user_pages_dirty_lock(chunk->plist, num_pages, dirty);
+       unpin_user_pages_dirty_lock(chunk->plist, num_pages, dirty);
 }

 void siw_umem_release(struct siw_umem *umem, bool dirty)
--- a/drivers/media/v4l2-core/videobuf-dma-sg.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/media/v4l2-core/videobuf-dma-sg.c
@@ -349,8 +349,8 @@ int videobuf_dma_free(struct videobuf_dm
        BUG_ON(dma->sglen);

        if (dma->pages) {
-               put_user_pages_dirty_lock(dma->pages, dma->nr_pages,
-                                         dma->direction == DMA_FROM_DEVICE);
+               unpin_user_pages_dirty_lock(dma->pages, dma->nr_pages,
+                                           dma->direction == DMA_FROM_DEVICE);
                kfree(dma->pages);
                dma->pages = NULL;
        }
--- a/drivers/platform/goldfish/goldfish_pipe.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/platform/goldfish/goldfish_pipe.c
@@ -360,8 +360,8 @@ static int transfer_max_buffers(struct g
        *consumed_size = pipe->command_buffer->rw_params.consumed_size;

-       put_user_pages_dirty_lock(pipe->pages, pages_count,
-                                 !is_write && *consumed_size > 0);
+       unpin_user_pages_dirty_lock(pipe->pages, pages_count,
+                                   !is_write && *consumed_size > 0);

        mutex_unlock(&pipe->lock);
        return 0;
--- a/drivers/vfio/vfio_iommu_type1.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/drivers/vfio/vfio_iommu_type1.c
@@ -310,7 +310,7 @@ static int put_pfn(unsigned long pfn, in
        if (!is_invalid_reserved_pfn(pfn)) {
                struct page *page = pfn_to_page(pfn);

-               put_user_pages_dirty_lock(&page, 1, prot & IOMMU_WRITE);
+               unpin_user_pages_dirty_lock(&page, 1, prot & IOMMU_WRITE);
                return 1;
        }
        return 0;
--- a/fs/io_uring.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/fs/io_uring.c
@@ -6005,7 +6005,7 @@ static int io_sqe_buffer_unregister(stru
                struct io_mapped_ubuf *imu = &ctx->user_bufs[i];

                for (j = 0; j < imu->nr_bvecs; j++)
-                       put_user_page(imu->bvec[j].bv_page);
+                       unpin_user_page(imu->bvec[j].bv_page);

                if (ctx->account_mem)
                        io_unaccount_mem(ctx->user, imu->nr_bvecs);
@@ -6150,7 +6150,7 @@ static int io_sqe_buffer_register(struct
                         * release any pages we did get
                         */
                        if (pret > 0)
-                               put_user_pages(pages, pret);
+                               unpin_user_pages(pages, pret);
                        if (ctx->account_mem)
                                io_unaccount_mem(ctx->user, nr_pages);
                        kvfree(imu->bvec);
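One pattern worth calling out from the driver hunks above: the make_dirty
argument of unpin_user_pages_dirty_lock() is typically computed from the DMA
direction, so pages are only marked dirty when the device may have written to
them (via_dmablit and videobuf-dma-sg are the clearest examples). A
hypothetical condensed form, with demo_release as an invented name:

#include <linux/dma-direction.h>
#include <linux/mm.h>

/* Invented helper condensing the release pattern from the hunks above. */
static void demo_release(struct page **pages, unsigned long npages,
                         enum dma_data_direction dir)
{
        /* The device wrote to memory only in the DMA_FROM_DEVICE case. */
        unpin_user_pages_dirty_lock(pages, npages, dir == DMA_FROM_DEVICE);
}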
--- a/include/linux/mm.h~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/include/linux/mm.h
@@ -1039,27 +1039,27 @@ static inline void put_page(struct page
 }

 /**
- * put_user_page() - release a gup-pinned page
+ * unpin_user_page() - release a gup-pinned page
  * @page:            pointer to page to be released
  *
  * Pages that were pinned via pin_user_pages*() must be released via either
- * put_user_page(), or one of the put_user_pages*() routines. This is so that
- * eventually such pages can be separately tracked and uniquely handled. In
+ * unpin_user_page(), or one of the unpin_user_pages*() routines. This is so
+ * that eventually such pages can be separately tracked and uniquely handled. In
  * particular, interactions with RDMA and filesystems need special handling.
  *
- * put_user_page() and put_page() are not interchangeable, despite this early
- * implementation that makes them look the same. put_user_page() calls must
+ * unpin_user_page() and put_page() are not interchangeable, despite this early
+ * implementation that makes them look the same. unpin_user_page() calls must
  * be perfectly matched up with pin*() calls.
  */
-static inline void put_user_page(struct page *page)
+static inline void unpin_user_page(struct page *page)
 {
        put_page(page);
 }

-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
-                              bool make_dirty);
+void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+                                bool make_dirty);

-void put_user_pages(struct page **pages, unsigned long npages);
+void unpin_user_pages(struct page **pages, unsigned long npages);

 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
 #define SECTION_IN_PAGE_FLAGS
@@ -2590,7 +2590,7 @@ struct page *follow_page(struct vm_area_
 #define FOLL_ANON      0x8000  /* don't do file mappings */
 #define FOLL_LONGTERM  0x10000 /* mapping lifetime is indefinite: see below */
 #define FOLL_SPLIT_PMD 0x20000 /* split huge pmd before returning */
-#define FOLL_PIN       0x40000 /* pages must be released via put_user_page() */
+#define FOLL_PIN       0x40000 /* pages must be released via unpin_user_page() */

 /*
  * FOLL_PIN and FOLL_LONGTERM may be used in various combinations with each
@@ -2625,7 +2625,7 @@ struct page *follow_page(struct vm_area_
  * Direct IO). This lets the filesystem know that some non-file-system entity is
  * potentially changing the pages' data. In contrast to FOLL_GET (whose pages
  * are released via put_page()), FOLL_PIN pages must be released, ultimately, by
- * a call to put_user_page().
+ * a call to unpin_user_page().
  *
  * FOLL_PIN is similar to FOLL_GET: both of these pin pages. They use different
  * and separate refcounting mechanisms, however, and that means that each has
@@ -2633,7 +2633,7 @@ struct page *follow_page(struct vm_area_
  *
  * FOLL_GET: get_user_pages*() to acquire, and put_page() to release.
  *
- * FOLL_PIN: pin_user_pages*() to acquire, and put_user_pages to release.
+ * FOLL_PIN: pin_user_pages*() to acquire, and unpin_user_pages to release.
  *
  * FOLL_PIN and FOLL_GET are mutually exclusive for a given function call.
  * (The underlying pages may experience both FOLL_GET-based and FOLL_PIN-based
@@ -2643,7 +2643,7 @@ struct page *follow_page(struct vm_area_
  * FOLL_PIN should be set internally by the pin_user_pages*() APIs, never
  * directly by the caller. That's in order to help avoid mismatches when
  * releasing pages: get_user_pages*() pages must be released via put_page(),
- * while pin_user_pages*() pages must be released via put_user_page().
+ * while pin_user_pages*() pages must be released via unpin_user_page().
  *
  * Please see Documentation/vm/pin_user_pages.rst for more information.
  */
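Since the comment block above contrasts the two disciplines, a hypothetical
side-by-side may help (demo_get_vs_pin is an invented name). The point of the
rename is that each release call now names the acquire side it belongs to, so
a mismatch is visible at a glance:

#include <linux/mm.h>
#include <linux/sched.h>

/* Hypothetical contrast of the FOLL_GET and FOLL_PIN disciplines. */
static void demo_get_vs_pin(unsigned long uaddr)
{
        struct page *page;
        long n;

        down_read(&current->mm->mmap_sem);

        /* FOLL_GET discipline: get_user_pages*() pairs with put_page(). */
        n = get_user_pages(uaddr, 1, 0, &page, NULL);
        if (n == 1)
                put_page(page);

        /* FOLL_PIN discipline: pin_user_pages*() pairs with unpin_user_page(). */
        n = pin_user_pages(uaddr, 1, 0, &page, NULL);
        if (n == 1)
                unpin_user_page(page);

        up_read(&current->mm->mmap_sem);
}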
--- a/mm/gup.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/mm/gup.c
@@ -45,7 +45,7 @@ static inline struct page *try_get_compo
 }

 /**
- * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
+ * unpin_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
  * @pages:  array of pages to be maybe marked dirty, and definitely released.
  * @npages: number of pages in the @pages array.
  * @make_dirty: whether to mark the pages dirty
@@ -55,19 +55,19 @@ static inline struct page *try_get_compo
  *
  * For each page in the @pages array, make that page (or its head page, if a
  * compound page) dirty, if @make_dirty is true, and if the page was previously
- * listed as clean. In any case, releases all pages using put_user_page(),
- * possibly via put_user_pages(), for the non-dirty case.
+ * listed as clean. In any case, releases all pages using unpin_user_page(),
+ * possibly via unpin_user_pages(), for the non-dirty case.
  *
- * Please see the put_user_page() documentation for details.
+ * Please see the unpin_user_page() documentation for details.
  *
  * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is
  * required, then the caller should a) verify that this is really correct,
  * because _lock() is usually required, and b) hand code it:
- * set_page_dirty_lock(), put_user_page().
+ * set_page_dirty_lock(), unpin_user_page().
  *
  */
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
-                              bool make_dirty)
+void unpin_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+                                bool make_dirty)
 {
        unsigned long index;

@@ -78,7 +78,7 @@ void put_user_pages_dirty_lock(struct pa
         */

        if (!make_dirty) {
-               put_user_pages(pages, npages);
+               unpin_user_pages(pages, npages);
                return;
        }

@@ -106,21 +106,21 @@ void put_user_pages_dirty_lock(struct pa
                 */
                if (!PageDirty(page))
                        set_page_dirty_lock(page);
-               put_user_page(page);
+               unpin_user_page(page);
        }
 }
-EXPORT_SYMBOL(put_user_pages_dirty_lock);
+EXPORT_SYMBOL(unpin_user_pages_dirty_lock);

 /**
- * put_user_pages() - release an array of gup-pinned pages.
+ * unpin_user_pages() - release an array of gup-pinned pages.
  * @pages:  array of pages to be marked dirty and released.
  * @npages: number of pages in the @pages array.
  *
- * For each page in the @pages array, release the page using put_user_page().
+ * For each page in the @pages array, release the page using unpin_user_page().
  *
- * Please see the put_user_page() documentation for details.
+ * Please see the unpin_user_page() documentation for details.
  */
-void put_user_pages(struct page **pages, unsigned long npages)
+void unpin_user_pages(struct page **pages, unsigned long npages)
 {
        unsigned long index;

@@ -130,9 +130,9 @@ void put_user_pages(struct page **pages,
         * single operation to the head page should suffice.
         */
        for (index = 0; index < npages; index++)
-               put_user_page(pages[index]);
+               unpin_user_page(pages[index]);
 }
-EXPORT_SYMBOL(put_user_pages);
+EXPORT_SYMBOL(unpin_user_pages);

 #ifdef CONFIG_MMU
 static struct page *no_page_table(struct vm_area_struct *vma,
--- a/mm/process_vm_access.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/mm/process_vm_access.c
@@ -126,8 +126,8 @@ static int process_vm_rw_single_vec(unsi
                pa += pinned_pages * PAGE_SIZE;

                /* If vm_write is set, the pages need to be made dirty: */
-               put_user_pages_dirty_lock(process_pages, pinned_pages,
-                                         vm_write);
+               unpin_user_pages_dirty_lock(process_pages, pinned_pages,
+                                           vm_write);
        }

        return rc;
--- a/net/xdp/xdp_umem.c~mm-tree-wide-rename-put_user_page-to-unpin_user_page
+++ a/net/xdp/xdp_umem.c
@@ -212,7 +212,7 @@ static int xdp_umem_map_pages(struct xdp
 static void xdp_umem_unpin_pages(struct xdp_umem *umem)
 {
-       put_user_pages_dirty_lock(umem->pgs, umem->npgs, true);
+       unpin_user_pages_dirty_lock(umem->pgs, umem->npgs, true);

        kfree(umem->pgs);
        umem->pgs = NULL;
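Finally, the unpin_user_pages_dirty_lock() kernel-doc above mentions
hand-coding the sequence when the caller has verified that plain
set_page_dirty() is safe. A hypothetical sketch of that hand-coded loop
(demo_unpin_dirty is an invented name):

#include <linux/mm.h>

/*
 * Invented helper: the hand-coded variant from the kernel-doc recipe,
 * valid only when the caller has verified that set_page_dirty()
 * (without the _lock) is correct in its context.
 */
static void demo_unpin_dirty(struct page **pages, unsigned long npages)
{
        unsigned long i;

        for (i = 0; i < npages; i++) {
                set_page_dirty(pages[i]);
                unpin_user_page(pages[i]);
        }
}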