From patchwork Wed Oct  2 14:28:01 2013
X-Patchwork-Submitter: Jan Kara
X-Patchwork-Id: 2975301
From: Jan Kara <jack@suse.cz>
To: LKML
Cc: linux-mm@kvack.org, Jan Kara, Roland Dreier, linux-rdma@vger.kernel.org
Subject: [PATCH 20/26] ib: Convert ib_umem_get() to get_user_pages_unlocked()
Date: Wed,  2 Oct 2013 16:28:01 +0200
Message-Id: <1380724087-13927-21-git-send-email-jack@suse.cz>
In-Reply-To: <1380724087-13927-1-git-send-email-jack@suse.cz>
References: <1380724087-13927-1-git-send-email-jack@suse.cz>

Convert ib_umem_get() to use get_user_pages_unlocked(). This significantly
shortens the section where mmap_sem is held (it is now needed only for
updating mm->pinned_vm and inside get_user_pages() itself) and removes the
caller's knowledge of how get_user_pages() handles locking.
CC: Roland Dreier
CC: linux-rdma@vger.kernel.org
Signed-off-by: Jan Kara <jack@suse.cz>
---
 drivers/infiniband/core/umem.c | 41 +++++++++++++++++------------------------
 1 file changed, 17 insertions(+), 24 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index a84112322071..0640a89021a9 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -80,7 +80,6 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 {
 	struct ib_umem *umem;
 	struct page **page_list;
-	struct vm_area_struct **vma_list;
 	struct ib_umem_chunk *chunk;
 	unsigned long locked;
 	unsigned long lock_limit;
@@ -125,34 +124,31 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 		return ERR_PTR(-ENOMEM);
 	}
 
-	/*
-	 * if we can't alloc the vma_list, it's not so bad;
-	 * just assume the memory is not hugetlb memory
-	 */
-	vma_list = (struct vm_area_struct **) __get_free_page(GFP_KERNEL);
-	if (!vma_list)
-		umem->hugetlb = 0;
-
 	npages = PAGE_ALIGN(size + umem->offset) >> PAGE_SHIFT;
 
 	down_write(&current->mm->mmap_sem);
 
-	locked     = npages + current->mm->pinned_vm;
 	lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
-
-	if ((locked > lock_limit) && !capable(CAP_IPC_LOCK)) {
-		ret = -ENOMEM;
-		goto out;
+	locked = npages;
+	if (npages + current->mm->pinned_vm > lock_limit &&
+	    !capable(CAP_IPC_LOCK)) {
+		up_write(&current->mm->mmap_sem);
+		kfree(umem);
+		free_page((unsigned long) page_list);
+		return ERR_PTR(-ENOMEM);
 	}
+	current->mm->pinned_vm += npages;
+
+	up_write(&current->mm->mmap_sem);
 
 	cur_base = addr & PAGE_MASK;
 
 	ret = 0;
 	while (npages) {
-		ret = get_user_pages(current, current->mm, cur_base,
+		ret = get_user_pages_unlocked(current, current->mm, cur_base,
 				     min_t(unsigned long, npages,
 					   PAGE_SIZE / sizeof (struct page *)),
-				     1, !umem->writable, page_list, vma_list);
+				     1, !umem->writable, page_list);
 
 		if (ret < 0)
 			goto out;
@@ -174,8 +170,7 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr,
 			chunk->nents = min_t(int, ret, IB_UMEM_MAX_PAGE_CHUNK);
 			sg_init_table(chunk->page_list, chunk->nents);
 			for (i = 0; i < chunk->nents; ++i) {
-				if (vma_list &&
-				    !is_vm_hugetlb_page(vma_list[i + off]))
+				if (!PageHuge(page_list[i + off]))
 					umem->hugetlb = 0;
 				sg_set_page(&chunk->page_list[i], page_list[i + off], PAGE_SIZE, 0);
 			}
@@ -206,12 +201,10 @@ out:
 	if (ret < 0) {
 		__ib_umem_release(context->device, umem, 0);
 		kfree(umem);
-	} else
-		current->mm->pinned_vm = locked;
-
-	up_write(&current->mm->mmap_sem);
-	if (vma_list)
-		free_page((unsigned long) vma_list);
+		down_write(&current->mm->mmap_sem);
+		current->mm->pinned_vm -= locked;
+		up_write(&current->mm->mmap_sem);
+	}
 	free_page((unsigned long) page_list);
 
 	return ret < 0 ? ERR_PTR(ret) : umem;
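
For reference, the point of the conversion is that get_user_pages_unlocked()
takes and drops mmap_sem internally, so ib_umem_get() only needs the lock for
the pinned_vm accounting. A minimal sketch of such a helper, assuming the
simple wrapper form (the helper actually introduced earlier in this series may
differ in detail), is:

/*
 * Sketch only: take mm->mmap_sem for read around get_user_pages() so that
 * callers such as ib_umem_get() need not manage the lock themselves.
 * The argument order mirrors the call site in the patch above:
 * (tsk, mm, start, nr_pages, write, force, pages).
 */
static inline long get_user_pages_unlocked(struct task_struct *tsk,
					   struct mm_struct *mm,
					   unsigned long start,
					   unsigned long nr_pages,
					   int write, int force,
					   struct page **pages)
{
	long ret;

	down_read(&mm->mmap_sem);
	ret = get_user_pages(tsk, mm, start, nr_pages, write, force,
			     pages, NULL);
	up_read(&mm->mmap_sem);

	return ret;
}

Because the vmas argument is always NULL here, the caller can no longer ask
get_user_pages() about hugetlb VMAs, which is why the patch switches the
hugetlb detection to PageHuge() on the returned pages instead.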