From patchwork Tue Jan 15 18:12:57 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Davidlohr Bueso
X-Patchwork-Id: 10764925
X-Patchwork-Delegate: jgg@ziepe.ca
From: Davidlohr Bueso
To: akpm@linux-foundation.org
Cc: dledford@redhat.com, jgg@mellanox.com, linux-rdma@vger.kernel.org,
    linux-mm@kvack.org, dave@stgolabs.net, dennis.dalessandro@intel.com,
    mike.marciniszyn@intel.com, Davidlohr Bueso
Subject: [PATCH 3/6] drivers/IB,qib: do not use mmap_sem
Date: Tue, 15 Jan 2019 10:12:57 -0800
Message-Id: <20190115181300.27547-4-dave@stgolabs.net>
X-Mailer: git-send-email 2.16.4
In-Reply-To: <20190115181300.27547-1-dave@stgolabs.net>
References: <20190115181300.27547-1-dave@stgolabs.net>

The driver uses mmap_sem for both pinned_vm accounting and
get_user_pages(). By using gup_fast() and letting the mm take the
lock if needed, we no longer need the semaphore and can simplify
the whole thing, as the pinning is decoupled from the lock.

This also fixes a bug where __qib_get_user_pages() was not taking
the current value of pinned_vm into account.

Cc: dennis.dalessandro@intel.com
Cc: mike.marciniszyn@intel.com
Signed-off-by: Davidlohr Bueso
Reviewed-by: Ira Weiny
---
 drivers/infiniband/hw/qib/qib_user_pages.c | 67 ++++++++++--------------
 1 file changed, 22 insertions(+), 45 deletions(-)

diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index 981795b23b73..4b7a5be782e6 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -49,43 +49,6 @@ static void __qib_release_user_pages(struct page **p, size_t num_pages,
 	}
 }
 
-/*
- * Call with current->mm->mmap_sem held.
- */
-static int __qib_get_user_pages(unsigned long start_page, size_t num_pages,
-				struct page **p)
-{
-	unsigned long lock_limit;
-	size_t got;
-	int ret;
-
-	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
-	if (num_pages > lock_limit && !capable(CAP_IPC_LOCK)) {
-		ret = -ENOMEM;
-		goto bail;
-	}
-
-	for (got = 0; got < num_pages; got += ret) {
-		ret = get_user_pages(start_page + got * PAGE_SIZE,
-				     num_pages - got,
-				     FOLL_WRITE | FOLL_FORCE,
-				     p + got, NULL);
-		if (ret < 0)
-			goto bail_release;
-	}
-
-	atomic_long_add(num_pages, &current->mm->pinned_vm);
-
-	ret = 0;
-	goto bail;
-
-bail_release:
-	__qib_release_user_pages(p, got, 0);
-bail:
-	return ret;
-}
-
 /**
  * qib_map_page - a safety wrapper around pci_map_page()
  *
@@ -137,26 +100,40 @@ int qib_map_page(struct pci_dev *hwdev, struct page *page, dma_addr_t *daddr)
 int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 		       struct page **p)
 {
+	unsigned long locked, lock_limit;
+	size_t got;
 	int ret;
 
-	down_write(&current->mm->mmap_sem);
+	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
+	locked = atomic_long_add_return(num_pages, &current->mm->pinned_vm);
 
-	ret = __qib_get_user_pages(start_page, num_pages, p);
+	if (locked > lock_limit && !capable(CAP_IPC_LOCK)) {
+		ret = -ENOMEM;
+		goto bail;
+	}
 
-	up_write(&current->mm->mmap_sem);
+	for (got = 0; got < num_pages; got += ret) {
+		ret = get_user_pages_fast(start_page + got * PAGE_SIZE,
+					  num_pages - got,
+					  FOLL_WRITE | FOLL_FORCE,
+					  p + got);
+		if (ret < 0)
+			goto bail_release;
+	}
 
+	return 0;
+bail_release:
+	__qib_release_user_pages(p, got, 0);
+bail:
+	atomic_long_sub(num_pages, &current->mm->pinned_vm);
 	return ret;
 }
 
 void qib_release_user_pages(struct page **p, size_t num_pages)
 {
-	if (current->mm) /* during close after signal, mm can be NULL */
-		down_write(&current->mm->mmap_sem);
-
 	__qib_release_user_pages(p, num_pages, 1);
 
-	if (current->mm) {
+	if (current->mm) { /* during close after signal, mm can be NULL */
 		atomic_long_sub(num_pages, &current->mm->pinned_vm);
-		up_write(&current->mm->mmap_sem);
 	}
 }
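
For readers new to the pattern, below is a minimal user-space C sketch of
the optimistic charge-then-unwind accounting the patch switches to. Every
name in it (try_account_pinned, unaccount_pinned, lock_limit, pinned_vm as
a plain counter) is a hypothetical stand-in for illustration only; the
kernel code above uses atomic_long_add_return(), rlimit(RLIMIT_MEMLOCK)
and capable(CAP_IPC_LOCK) instead.

	/*
	 * Standalone user-space sketch of the optimistic pinned-page
	 * accounting pattern. All names are hypothetical illustrations,
	 * not the kernel API.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_long pinned_vm;   /* pages currently pinned */
	static long lock_limit = 4;     /* stand-in for RLIMIT_MEMLOCK >> PAGE_SHIFT */

	/* Optimistically charge num_pages; unwind and fail if over the limit. */
	static bool try_account_pinned(long num_pages)
	{
		/* fetch_add returns the old value; adding num_pages gives the
		 * new total, mirroring atomic_long_add_return() above. */
		long locked = atomic_fetch_add(&pinned_vm, num_pages) + num_pages;

		if (locked > lock_limit) {
			/* Over the limit: undo the charge, as in the bail: path. */
			atomic_fetch_sub(&pinned_vm, num_pages);
			return false;
		}
		return true;
	}

	static void unaccount_pinned(long num_pages)
	{
		atomic_fetch_sub(&pinned_vm, num_pages);
	}

	int main(void)
	{
		printf("pin 3: %s\n", try_account_pinned(3) ? "ok" : "over limit");
		printf("pin 3: %s\n", try_account_pinned(3) ? "ok" : "over limit");
		unaccount_pinned(3);
		printf("pin 3: %s\n", try_account_pinned(3) ? "ok" : "over limit");
		return 0;
	}

The point of the pattern is that the counter is charged before pinning is
attempted and unwound on every failure path, so no sleeping lock (here,
mmap_sem) is needed around the accounting itself.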