From patchwork Fri Jan 31 06:13:02 2020
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 11359187
Date: Thu, 30 Jan 2020 22:13:02 -0800
From: Andrew Morton
To: akpm@linux-foundation.org, alex.williamson@redhat.com,
 aneesh.kumar@linux.ibm.com, axboe@kernel.dk, bjorn.topel@intel.com,
 corbet@lwn.net, dan.j.williams@intel.com, daniel.vetter@ffwll.ch,
 hch@lst.de, hverkuil-cisco@xs4all.nl, ira.weiny@intel.com, jack@suse.cz,
 jgg@mellanox.com, jgg@ziepe.ca, jglisse@redhat.com, jhubbard@nvidia.com,
 kirill@shutemov.name, leonro@mellanox.com, linux-mm@kvack.org,
 mchehab@kernel.org, mm-commits@vger.kernel.org, rppt@linux.ibm.com,
 torvalds@linux-foundation.org
Subject: [patch 036/118] IB/{core,hw,umem}: set FOLL_PIN via pin_user_pages*(), fix up ODP
Message-ID: <20200131061302.BxpCPixtw%akpm@linux-foundation.org>
In-Reply-To: <20200130221021.5f0211c56346d5485af07923@linux-foundation.org>

From: John Hubbard
Subject: IB/{core,hw,umem}: set FOLL_PIN via pin_user_pages*(), fix up ODP

Convert infiniband to use the new pin_user_pages*() calls.

Also, revert earlier changes to Infiniband ODP that had it using
put_user_page().  ODP is "Case 3" in
Documentation/core-api/pin_user_pages.rst, which is to say, normal
get_user_pages() and put_page() is the API to use there.

The new pin_user_pages*() calls replace corresponding get_user_pages*()
calls, and set the FOLL_PIN flag.  The FOLL_PIN flag requires that the
caller must return the pages via put_user_page*() calls, but infiniband
was already doing that as part of an earlier commit.

Link: http://lkml.kernel.org/r/20200107224558.2362728-14-jhubbard@nvidia.com
Signed-off-by: John Hubbard
Reviewed-by: Jason Gunthorpe
Cc: Alex Williamson
Cc: Aneesh Kumar K.V
Cc: Björn Töpel
Cc: Christoph Hellwig
Cc: Daniel Vetter
Cc: Dan Williams
Cc: Hans Verkuil
Cc: Ira Weiny
Cc: Jan Kara
Cc: Jason Gunthorpe
Cc: Jens Axboe
Cc: Jerome Glisse
Cc: Jonathan Corbet
Cc: Kirill A. Shutemov
Cc: Leon Romanovsky
Cc: Mauro Carvalho Chehab
Cc: Mike Rapoport
Signed-off-by: Andrew Morton
---

 drivers/infiniband/core/umem.c              |    2 +-
 drivers/infiniband/core/umem_odp.c          |   13 ++++++-------
 drivers/infiniband/hw/hfi1/user_pages.c     |    2 +-
 drivers/infiniband/hw/mthca/mthca_memfree.c |    2 +-
 drivers/infiniband/hw/qib/qib_user_pages.c  |    2 +-
 drivers/infiniband/hw/qib/qib_user_sdma.c   |    2 +-
 drivers/infiniband/hw/usnic/usnic_uiom.c    |    2 +-
 drivers/infiniband/sw/siw/siw_mem.c         |    2 +-
 8 files changed, 13 insertions(+), 14 deletions(-)

--- a/drivers/infiniband/core/umem.c~ib-corehwumem-set-foll_pin-via-pin_user_pages-fix-up-odp
+++ a/drivers/infiniband/core/umem.c
@@ -257,7 +257,7 @@ struct ib_umem *ib_umem_get(struct ib_de
 	sg = umem->sg_head.sgl;
 
 	while (npages) {
-		ret = get_user_pages_fast(cur_base,
+		ret = pin_user_pages_fast(cur_base,
 					  min_t(unsigned long, npages,
 						PAGE_SIZE /
 						sizeof(struct page *)),
--- a/drivers/infiniband/core/umem_odp.c~ib-corehwumem-set-foll_pin-via-pin_user_pages-fix-up-odp
+++ a/drivers/infiniband/core/umem_odp.c
@@ -293,9 +293,8 @@ EXPORT_SYMBOL(ib_umem_odp_release);
  * The function returns -EFAULT if the DMA mapping operation fails. It returns
  * -EAGAIN if a concurrent invalidation prevents us from updating the page.
  *
- * The page is released via put_user_page even if the operation failed. For
- * on-demand pinning, the page is released whenever it isn't stored in the
- * umem.
+ * The page is released via put_page even if the operation failed. For on-demand
+ * pinning, the page is released whenever it isn't stored in the umem.
  */
 static int ib_umem_odp_map_dma_single_page(
 		struct ib_umem_odp *umem_odp,
@@ -348,7 +347,7 @@ static int ib_umem_odp_map_dma_single_pa
 	}
 out:
-	put_user_page(page);
+	put_page(page);
 	return ret;
 }
@@ -458,7 +457,7 @@ int ib_umem_odp_map_dma_pages(struct ib_
 				ret = -EFAULT;
 				break;
 			}
-			put_user_page(local_page_list[j]);
+			put_page(local_page_list[j]);
 			continue;
 		}
@@ -485,8 +484,8 @@ int ib_umem_odp_map_dma_pages(struct ib_
 			 * ib_umem_odp_map_dma_single_page().
 			 */
 			if (npages - (j + 1) > 0)
-				put_user_pages(&local_page_list[j+1],
-					       npages - (j + 1));
+				release_pages(&local_page_list[j+1],
+					      npages - (j + 1));
 			break;
 		}
 	}
--- a/drivers/infiniband/hw/hfi1/user_pages.c~ib-corehwumem-set-foll_pin-via-pin_user_pages-fix-up-odp
+++ a/drivers/infiniband/hw/hfi1/user_pages.c
@@ -106,7 +106,7 @@ int hfi1_acquire_user_pages(struct mm_st
 	int ret;
 	unsigned int gup_flags = FOLL_LONGTERM | (writable ? FOLL_WRITE : 0);
 
-	ret = get_user_pages_fast(vaddr, npages, gup_flags, pages);
+	ret = pin_user_pages_fast(vaddr, npages, gup_flags, pages);
 	if (ret < 0)
 		return ret;
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c~ib-corehwumem-set-foll_pin-via-pin_user_pages-fix-up-odp
+++ a/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -472,7 +472,7 @@ int mthca_map_user_db(struct mthca_dev *
 		goto out;
 	}
 
-	ret = get_user_pages_fast(uaddr & PAGE_MASK, 1,
+	ret = pin_user_pages_fast(uaddr & PAGE_MASK, 1,
 				  FOLL_WRITE | FOLL_LONGTERM, pages);
 	if (ret < 0)
 		goto out;
--- a/drivers/infiniband/hw/qib/qib_user_pages.c~ib-corehwumem-set-foll_pin-via-pin_user_pages-fix-up-odp
+++ a/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -108,7 +108,7 @@ int qib_get_user_pages(unsigned long sta
 	down_read(&current->mm->mmap_sem);
 	for (got = 0; got < num_pages; got += ret) {
-		ret = get_user_pages(start_page + got * PAGE_SIZE,
+		ret = pin_user_pages(start_page + got * PAGE_SIZE,
 				     num_pages - got,
 				     FOLL_LONGTERM | FOLL_WRITE | FOLL_FORCE,
 				     p + got, NULL);
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c~ib-corehwumem-set-foll_pin-via-pin_user_pages-fix-up-odp
+++ a/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -670,7 +670,7 @@ static int qib_user_sdma_pin_pages(const
 	else
 		j = npages;
 
-	ret = get_user_pages_fast(addr, j, FOLL_LONGTERM, pages);
+	ret = pin_user_pages_fast(addr, j, FOLL_LONGTERM, pages);
 	if (ret != j) {
 		i = 0;
 		j = ret;
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c~ib-corehwumem-set-foll_pin-via-pin_user_pages-fix-up-odp
+++ a/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -141,7 +141,7 @@ static int usnic_uiom_get_pages(unsigned
 	ret = 0;
 
 	while (npages) {
-		ret = get_user_pages(cur_base,
+		ret = pin_user_pages(cur_base,
 				     min_t(unsigned long, npages,
 					   PAGE_SIZE / sizeof(struct page *)),
 				     gup_flags | FOLL_LONGTERM,
--- a/drivers/infiniband/sw/siw/siw_mem.c~ib-corehwumem-set-foll_pin-via-pin_user_pages-fix-up-odp
+++ a/drivers/infiniband/sw/siw/siw_mem.c
@@ -426,7 +426,7 @@ struct siw_umem *siw_umem_get(u64 start,
 		while (nents) {
 			struct page **plist = &umem->page_chunk[i].plist[got];
 
-			rv = get_user_pages(first_page_va, nents,
+			rv = pin_user_pages(first_page_va, nents,
 					    foll_flags | FOLL_LONGTERM,
 					    plist, NULL);
 			if (rv < 0)
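
For reference, a minimal sketch of the calling convention the commit message
describes.  This is not code from the series: example_pin_for_dma() is a
hypothetical helper, and the flag choices are illustrative.  It only shows the
pairing that FOLL_PIN imposes: pages acquired through pin_user_pages_fast()
are released with put_user_page*() rather than put_page().

/*
 * Illustrative sketch only (hypothetical helper, not part of this patch):
 * pin user pages for long-term DMA and release them with the FOLL_PIN
 * counterpart.
 */
#include <linux/mm.h>

static int example_pin_for_dma(unsigned long uaddr, int npages,
			       struct page **pages)
{
	int pinned;

	/*
	 * pin_user_pages_fast() sets FOLL_PIN internally; FOLL_LONGTERM
	 * because the pages may stay pinned across system calls for DMA.
	 */
	pinned = pin_user_pages_fast(uaddr, npages,
				     FOLL_WRITE | FOLL_LONGTERM, pages);
	if (pinned < 0)
		return pinned;	/* nothing was pinned, nothing to release */

	/* ... set up DMA mappings, post work requests, etc. ... */

	/* FOLL_PIN pages must be returned via put_user_page*() calls. */
	put_user_pages(pages, pinned);
	return 0;
}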