From patchwork Thu Oct 27 20:34:03 2016
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 9400391
From: Lorenzo Stoakes
To: linux-mm@kvack.org
Cc: Michal Hocko, Linus Torvalds, Jan Kara, Hugh Dickins, Dave Hansen,
    Rik van Riel, Mel Gorman, Andrew Morton, Paolo Bonzini,
    Radim Krčmář, kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-security-module@vger.kernel.org, linux-rdma@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linux-fsdevel@vger.kernel.org,
    Lorenzo Stoakes
Subject: [PATCH v2 2/2] mm: unexport __get_user_pages_unlocked()
Date: Thu, 27 Oct 2016 21:34:03 +0100
Message-Id: <20161027203403.31708-3-lstoakes@gmail.com>
X-Mailer: git-send-email 2.10.1
In-Reply-To: <20161027203403.31708-1-lstoakes@gmail.com>
References: <20161027203403.31708-1-lstoakes@gmail.com>

This patch unexports the low-level __get_user_pages_unlocked() function
and replaces invocations with calls to more appropriate higher-level
functions.

In hva_to_pfn_slow() we are able to replace __get_user_pages_unlocked()
with get_user_pages_unlocked() since we can now pass gup_flags.

In async_pf_execute() and process_vm_rw_single_vec() we need to pass
different tsk, mm arguments so get_user_pages_remote() is the sane
replacement in these cases (having added manual acquisition and release
of mmap_sem).

Additionally get_user_pages_remote() reintroduces use of the FOLL_TOUCH
flag. However, this flag was originally silently dropped by
1e9877902dc7e ("mm/gup: Introduce get_user_pages_remote()"), so this
appears to have been unintentional and reintroducing it is therefore
not an issue.

Signed-off-by: Lorenzo Stoakes
---
v2: updated patch to apply against mainline rather than -mmots

 include/linux/mm.h     |  3 ---
 mm/gup.c               |  8 ++++----
 mm/nommu.c             |  7 +++----
 mm/process_vm_access.c | 12 ++++++++----
 virt/kvm/async_pf.c    | 10 +++++++---
 virt/kvm/kvm_main.c    |  5 ++---
 6 files changed, 24 insertions(+), 21 deletions(-)

--
2.10.1

diff --git a/include/linux/mm.h b/include/linux/mm.h
index cc15445..7b2d14e 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1280,9 +1280,6 @@ long get_user_pages(unsigned long start, unsigned long nr_pages,
			    struct vm_area_struct **vmas);
 long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
		    unsigned int gup_flags, struct page **pages, int *locked);
-long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
-			       unsigned long start, unsigned long nr_pages,
-			       struct page **pages, unsigned int gup_flags);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
		    struct page **pages, unsigned int gup_flags);
 int get_user_pages_fast(unsigned long start, int nr_pages, int write,
diff --git a/mm/gup.c b/mm/gup.c
index 0567851..8028af1 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -866,9 +866,10 @@ EXPORT_SYMBOL(get_user_pages_locked);
  * according to the parameters "pages", "write", "force"
  * respectively.
  */
-__always_inline long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
-						unsigned long start, unsigned long nr_pages,
-						struct page **pages, unsigned int gup_flags)
+static __always_inline long __get_user_pages_unlocked(struct task_struct *tsk,
+		struct mm_struct *mm, unsigned long start,
+		unsigned long nr_pages, struct page **pages,
+		unsigned int gup_flags)
 {
	long ret;
	int locked = 1;
@@ -880,7 +881,6 @@ __always_inline long __get_user_pages_unlocked(struct task_struct *tsk, struct m
	up_read(&mm->mmap_sem);
	return ret;
 }
-EXPORT_SYMBOL(__get_user_pages_unlocked);

 /*
  * get_user_pages_unlocked() is suitable to replace the form:
diff --git a/mm/nommu.c b/mm/nommu.c
index 8b8faaf..669437b 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -176,9 +176,9 @@ long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 }
 EXPORT_SYMBOL(get_user_pages_locked);

-long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
-			       unsigned long start, unsigned long nr_pages,
-			       struct page **pages, unsigned int gup_flags)
+static long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
+			unsigned long start, unsigned long nr_pages,
+			struct page **pages, unsigned int gup_flags)
 {
	long ret;
	down_read(&mm->mmap_sem);
@@ -187,7 +187,6 @@ long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
	up_read(&mm->mmap_sem);
	return ret;
 }
-EXPORT_SYMBOL(__get_user_pages_unlocked);

 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
			     struct page **pages, unsigned int gup_flags)
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index be8dc8d..84d0c7e 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -88,7 +88,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
	ssize_t rc = 0;
	unsigned long max_pages_per_loop = PVM_MAX_KMALLOC_PAGES
		/ sizeof(struct pages *);
-	unsigned int flags = FOLL_REMOTE;
+	unsigned int flags = 0;

	/* Work out address and page range required */
	if (len == 0)
@@ -100,15 +100,19 @@ static int process_vm_rw_single_vec(unsigned long addr,

	while (!rc && nr_pages && iov_iter_count(iter)) {
		int pages = min(nr_pages, max_pages_per_loop);
+		int locked = 1;
		size_t bytes;

		/*
		 * Get the pages we're interested in. We must
-		 * add FOLL_REMOTE because task/mm might not
+		 * access remotely because task/mm might not
		 * current/current->mm
		 */
-		pages = __get_user_pages_unlocked(task, mm, pa, pages,
-						  process_pages, flags);
+		down_read(&mm->mmap_sem);
+		pages = get_user_pages_remote(task, mm, pa, pages, flags,
+					      process_pages, NULL, &locked);
+		if (locked)
+			up_read(&mm->mmap_sem);
		if (pages <= 0)
			return -EFAULT;

diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index 8035cc1..dab8b19 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -76,16 +76,20 @@ static void async_pf_execute(struct work_struct *work)
	struct kvm_vcpu *vcpu = apf->vcpu;
	unsigned long addr = apf->addr;
	gva_t gva = apf->gva;
+	int locked = 1;

	might_sleep();

	/*
	 * This work is run asynchromously to the task which owns
	 * mm and might be done in another context, so we must
-	 * use FOLL_REMOTE.
+	 * access remotely.
	 */
-	__get_user_pages_unlocked(NULL, mm, addr, 1, NULL,
-			FOLL_WRITE | FOLL_REMOTE);
+	down_read(&mm->mmap_sem);
+	get_user_pages_remote(NULL, mm, addr, 1, FOLL_WRITE, NULL, NULL,
+			      &locked);
+	if (locked)
+		up_read(&mm->mmap_sem);

	kvm_async_page_present_sync(vcpu, apf);

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2907b7b..c45d951 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1415,13 +1415,12 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
		npages = get_user_page_nowait(addr, write_fault, page);
		up_read(&current->mm->mmap_sem);
	} else {
-		unsigned int flags = FOLL_TOUCH | FOLL_HWPOISON;
+		unsigned int flags = FOLL_HWPOISON;

		if (write_fault)
			flags |= FOLL_WRITE;

-		npages = __get_user_pages_unlocked(current, current->mm, addr, 1,
-						   page, flags);
+		npages = get_user_pages_unlocked(addr, 1, page, flags);
	}
	if (npages != 1)
		return npages;
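
For reference, the conversions in process_vm_access.c and async_pf.c above all
follow the same locking pattern around get_user_pages_remote(). Below is a
minimal sketch of that pattern; it is not part of the patch, the helper name
remote_gup_example() is made up purely for illustration, and it assumes the
eight-argument get_user_pages_remote() signature (with the trailing
"int *locked" parameter) that this series builds on:

	#include <linux/mm.h>
	#include <linux/sched.h>

	/*
	 * Illustrative only: take mmap_sem for read, perform the remote
	 * GUP, and release mmap_sem only if the callee has not already
	 * dropped it (in which case it clears "locked").
	 */
	static long remote_gup_example(struct task_struct *tsk,
				       struct mm_struct *mm,
				       unsigned long start,
				       struct page **pages,
				       unsigned int gup_flags)
	{
		int locked = 1;
		long ret;

		down_read(&mm->mmap_sem);
		ret = get_user_pages_remote(tsk, mm, start, 1, gup_flags,
					    pages, NULL, &locked);
		if (locked)
			up_read(&mm->mmap_sem);
		return ret;
	}

The up_read() is conditional because, when a non-NULL "locked" pointer is
supplied, get_user_pages_remote() may drop mmap_sem itself while handling a
fault and report that by clearing *locked; unlocking unconditionally would
risk a double unlock.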