From patchwork Tue Dec 4 00:17:20 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 10710951
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton, linux-mm@kvack.org
Cc: Jan Kara, Tom Talpey, Al Viro, Christian Benvenuti, Christoph Hellwig,
    Christopher Lameter, Dan Williams, Dennis Dalessandro, Doug Ledford,
    Jason Gunthorpe, Jerome Glisse, Matthew Wilcox, Michal Hocko,
    Mike Marciniszyn, Ralph Campbell, LKML, linux-fsdevel@vger.kernel.org,
    John Hubbard
Subject: [PATCH 2/2] infiniband/mm: convert put_page() to put_user_page*()
Date: Mon, 3 Dec 2018 16:17:20 -0800
Message-Id: <20181204001720.26138-3-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.19.2
In-Reply-To: <20181204001720.26138-1-jhubbard@nvidia.com>
References: <20181204001720.26138-1-jhubbard@nvidia.com>
MIME-Version: 1.0

From: John Hubbard

For infiniband code that retains pages via get_user_pages*(), release
those pages via the new put_user_page() or put_user_pages*() calls,
instead of put_page().

This is a tiny part of the second step of fixing the problem described
in [1]. The steps are:

1) Provide put_user_page*() routines, intended to be used for releasing
   pages that were pinned via get_user_pages*().

2) Convert all of the call sites for get_user_pages*() to invoke
   put_user_page*() instead of put_page(). This involves dozens of call
   sites, and will take some time.

3) After (2) is complete, use get_user_pages*() and put_user_page*() to
   implement tracking of these pages. This tracking will be separate
   from the existing struct page refcounting.

4) Use the tracking and identification of these pages to implement
   special handling (especially in writeback paths) when the pages are
   backed by a filesystem. Again, [1] provides details as to why that
   is desirable.

[1] https://lwn.net/Articles/753027/ : "The Trouble with get_user_pages()"
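For context, here is a minimal sketch of the put_user_page*() helpers
that this patch calls into. The real versions are introduced in patch
1/2 of this series; the bodies below only illustrate the intended
step-one semantics described above (thin wrappers that still funnel
into put_page()), and are not necessarily the exact upstream
implementation:

/* Sketch only -- patch 1/2 provides the real helpers. */
#include <linux/mm.h>

/*
 * Release a page that was pinned via get_user_pages*(). For now this
 * is equivalent to put_page(); later steps in the plan above give
 * gup-pinned pages their own tracking, separate from the normal
 * struct page refcount.
 */
static inline void put_user_page(struct page *page)
{
	put_page(page);
}

/*
 * Release an array of gup-pinned pages, marking each one dirty first.
 * This absorbs the per-page !PageDirty() test that callers such as
 * __ib_umem_release() used to open-code.
 */
static inline void put_user_pages_dirty_lock(struct page **pages,
					     unsigned long npages)
{
	unsigned long index;

	for (index = 0; index < npages; index++) {
		struct page *page = compound_head(pages[index]);

		if (!PageDirty(page))
			set_page_dirty_lock(page);
		put_user_page(page);
	}
}

/* Release an array of gup-pinned pages without dirtying them. */
static inline void put_user_pages(struct page **pages, unsigned long npages)
{
	unsigned long index;

	for (index = 0; index < npages; index++)
		put_user_page(pages[index]);
}

With helpers of this shape, each driver's open-coded loop over
set_page_dirty_lock() + put_page() collapses to a single call, as the
hunks below show.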
Reviewed-by: Jan Kara
Reviewed-by: Dennis Dalessandro
Acked-by: Jason Gunthorpe
Cc: Doug Ledford
Cc: Jason Gunthorpe
Cc: Mike Marciniszyn
Cc: Dennis Dalessandro
Cc: Christian Benvenuti
Signed-off-by: John Hubbard
---
 drivers/infiniband/core/umem.c              |  7 ++++---
 drivers/infiniband/core/umem_odp.c          |  2 +-
 drivers/infiniband/hw/hfi1/user_pages.c     | 11 ++++-------
 drivers/infiniband/hw/mthca/mthca_memfree.c |  6 +++---
 drivers/infiniband/hw/qib/qib_user_pages.c  | 11 ++++-------
 drivers/infiniband/hw/qib/qib_user_sdma.c   |  6 +++---
 drivers/infiniband/hw/usnic/usnic_uiom.c    |  7 ++++---
 7 files changed, 23 insertions(+), 27 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index c6144df47ea4..c2898bc7b3b2 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -58,9 +58,10 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	for_each_sg(umem->sg_head.sgl, sg, umem->npages, i) {
 
 		page = sg_page(sg);
-		if (!PageDirty(page) && umem->writable && dirty)
-			set_page_dirty_lock(page);
-		put_page(page);
+		if (umem->writable && dirty)
+			put_user_pages_dirty_lock(&page, 1);
+		else
+			put_user_page(page);
 	}
 
 	sg_free_table(&umem->sg_head);
diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
index 676c1fd1119d..99715049cd3b 100644
--- a/drivers/infiniband/core/umem_odp.c
+++ b/drivers/infiniband/core/umem_odp.c
@@ -659,7 +659,7 @@ int ib_umem_odp_map_dma_pages(struct ib_umem_odp *umem_odp, u64 user_virt,
 				ret = -EFAULT;
 				break;
 			}
-			put_page(local_page_list[j]);
+			put_user_page(local_page_list[j]);
 			continue;
 		}
 
diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index e341e6dcc388..99ccc0483711 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -121,13 +121,10 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	size_t i;
-
-	for (i = 0; i < npages; i++) {
-		if (dirty)
-			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
-	}
+	if (dirty)
+		put_user_pages_dirty_lock(p, npages);
+	else
+		put_user_pages(p, npages);
 
 	if (mm) { /* during close after signal, mm can be NULL */
 		down_write(&mm->mmap_sem);
diff --git a/drivers/infiniband/hw/mthca/mthca_memfree.c b/drivers/infiniband/hw/mthca/mthca_memfree.c
index cc9c0c8ccba3..b8b12effd009 100644
--- a/drivers/infiniband/hw/mthca/mthca_memfree.c
+++ b/drivers/infiniband/hw/mthca/mthca_memfree.c
@@ -481,7 +481,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 
 	ret = pci_map_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
 	if (ret < 0) {
-		put_page(pages[0]);
+		put_user_page(pages[0]);
 		goto out;
 	}
 
@@ -489,7 +489,7 @@ int mthca_map_user_db(struct mthca_dev *dev, struct mthca_uar *uar,
 			 mthca_uarc_virt(dev, uar, i));
 	if (ret) {
 		pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-		put_page(sg_page(&db_tab->page[i].mem));
+		put_user_page(sg_page(&db_tab->page[i].mem));
 		goto out;
 	}
 
@@ -555,7 +555,7 @@ void mthca_cleanup_user_db_tab(struct mthca_dev *dev, struct mthca_uar *uar,
 		if (db_tab->page[i].uvirt) {
 			mthca_UNMAP_ICM(dev, mthca_uarc_virt(dev, uar, i), 1);
 			pci_unmap_sg(dev->pdev, &db_tab->page[i].mem, 1, PCI_DMA_TODEVICE);
-			put_page(sg_page(&db_tab->page[i].mem));
+			put_user_page(sg_page(&db_tab->page[i].mem));
 		}
 	}
 
diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 16543d5e80c3..1a5c64c8695f 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -40,13 +40,10 @@
 static void __qib_release_user_pages(struct page **p, size_t num_pages,
 				     int dirty)
 {
-	size_t i;
-
-	for (i = 0; i < num_pages; i++) {
-		if (dirty)
-			set_page_dirty_lock(p[i]);
-		put_page(p[i]);
-	}
+	if (dirty)
+		put_user_pages_dirty_lock(p, num_pages);
+	else
+		put_user_pages(p, num_pages);
 }
 
 /*
diff --git a/drivers/infiniband/hw/qib/qib_user_sdma.c b/drivers/infiniband/hw/qib/qib_user_sdma.c
index 926f3c8eba69..4a4b802b011f 100644
--- a/drivers/infiniband/hw/qib/qib_user_sdma.c
+++ b/drivers/infiniband/hw/qib/qib_user_sdma.c
@@ -321,7 +321,7 @@ static int qib_user_sdma_page_to_frags(const struct qib_devdata *dd,
 			 * the caller can ignore this page.
 			 */
 			if (put) {
-				put_page(page);
+				put_user_page(page);
 			} else {
 				/* coalesce case */
 				kunmap(page);
@@ -635,7 +635,7 @@ static void qib_user_sdma_free_pkt_frag(struct device *dev,
 			kunmap(pkt->addr[i].page);
 
 			if (pkt->addr[i].put_page)
-				put_page(pkt->addr[i].page);
+				put_user_page(pkt->addr[i].page);
 			else
 				__free_page(pkt->addr[i].page);
 		} else if (pkt->addr[i].kvaddr) {
@@ -710,7 +710,7 @@ static int qib_user_sdma_pin_pages(const struct qib_devdata *dd,
 	/* if error, return all pages not managed by pkt */
 free_pages:
 	while (i < j)
-		put_page(pages[i++]);
+		put_user_page(pages[i++]);
 
 done:
 	return ret;
diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 49275a548751..2ef8d31dc838 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -77,9 +77,10 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
 			page = sg_page(sg);
 			pa = sg_phys(sg);
-			if (!PageDirty(page) && dirty)
-				set_page_dirty_lock(page);
-			put_page(page);
+			if (dirty)
+				put_user_pages_dirty_lock(&page, 1);
+			else
+				put_user_page(page);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);