From patchwork Wed Sep 21 02:00:28 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Yuriy Romanenko
X-Patchwork-Id: 9342689
From: Yuriy Romanenko
Date: Tue, 20 Sep 2016 19:00:28 -0700
Subject: [PATCH] ARM: *: mm: Implement get_user_pages_fast()
To: Russell King, linux-arm-kernel@lists.infradead.org
List-Id: linux-arm-kernel@lists.infradead.org

From 6be781314e78ad43d797915189145a0aae41f639 Mon Sep 17 00:00:00 2001
From: Yuriy Romanenko
Date: Tue, 20 Sep 2016 18:50:16 -0700
Subject: [PATCH] ARM: *: mm: Implement get_user_pages_fast()

Do an unlocked walk of the page tables; if that pins every requested
page, succeed and return immediately. Otherwise, fall back to the old
locked slow path for the remaining pages.

Signed-off-by: Yuriy Romanenko
---
 arch/arm/mm/Makefile |  2 +-
 arch/arm/mm/gup.c    | 90 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 91 insertions(+), 1 deletion(-)
 create mode 100644 arch/arm/mm/gup.c

diff --git a/arch/arm/mm/Makefile b/arch/arm/mm/Makefile
index 7f76d96..096cfcb 100644
--- a/arch/arm/mm/Makefile
+++ b/arch/arm/mm/Makefile
@@ -6,7 +6,7 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
 				   iomap.o
 
 obj-$(CONFIG_MMU)		+= fault-armv.o flush.o idmap.o ioremap.o \
-				   mmap.o pgd.o mmu.o pageattr.o
+				   mmap.o pgd.o mmu.o pageattr.o gup.o
 
 ifneq ($(CONFIG_MMU),y)
 obj-y				+= nommu.o
diff --git a/arch/arm/mm/gup.c b/arch/arm/mm/gup.c
new file mode 100644
index 0000000..6e57bc9
--- /dev/null
+++ b/arch/arm/mm/gup.c
@@ -0,0 +1,90 @@
+/*
+ * Lockless get_user_pages_fast for ARM
+ *
+ * Copyright (C) 2014 Lytro, Inc.
+ */
+
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/pagemap.h>
+#include <linux/highmem.h>
+#include <linux/rwsem.h>
+
+#include <asm/pgtable.h>
+
+struct gup_private_data {
+	int nr;
+	struct page **pages;
+	int write;
+};
+
+static int gup_pte_entry(pte_t *ptep, unsigned long start,
+			 unsigned long end, struct mm_walk *walk)
+{
+	struct gup_private_data *private_data =
+		(struct gup_private_data *)walk->private;
+	struct page *page;
+	pte_t pte = *ptep;
+	if (!pte_present(pte) ||
+	    pte_special(pte) ||
+	    (private_data->write && !pte_write(pte)))
+	{
+		return private_data->nr;
+	}
+	page = pte_page(pte);
+	get_page(page);
+	private_data->pages[private_data->nr++] = page;
+	return 0;
+}
+
+static int gup_pte_hole_entry(unsigned long start, unsigned long end,
+			      struct mm_walk *walk)
+{
+	struct gup_private_data *private_data =
+		(struct gup_private_data *)walk->private;
+	return private_data->nr;
+}
+
+
+int get_user_pages_fast(unsigned long start, int nr_pages, int write,
+			struct page **pages)
+{
+	struct mm_struct *mm = current->mm;
+	int ret;
+	unsigned long page_addr = (start & PAGE_MASK);
+	int nr = 0;
+
+	struct gup_private_data private_data = {
+		.nr = 0,
+		.pages = pages,
+		.write = write
+	};
+
+	struct mm_walk gup_walk = {
+		.pte_entry = gup_pte_entry,
+		.pte_hole = gup_pte_hole_entry,
+		.mm = mm,
+		.private = (void *)&private_data
+	};
+
+	ret = walk_page_range(page_addr,
+			      page_addr + nr_pages * PAGE_SIZE,
+			      &gup_walk);
+	nr = ret ? ret : nr_pages;
+
+	if (nr == nr_pages)
+	{
+		return nr;
+	}
+	else
+	{
+		page_addr += (nr << PAGE_SHIFT);
+	}
+
+	down_read(&mm->mmap_sem);
+	ret = get_user_pages(current, mm, page_addr,
+			     nr_pages - nr, write, 0, pages + nr, NULL);
+	up_read(&mm->mmap_sem);
+
+	return (ret < 0) ? nr : (ret + nr);
+}
\ No newline at end of file