From patchwork Thu Oct 13 00:20:12 2016
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 9374321
From: Lorenzo Stoakes
To: linux-mm@kvack.org
Cc: Linus Torvalds, Jan Kara, Hugh Dickins, Dave Hansen, Rik van Riel,
    Mel Gorman, Andrew Morton, adi-buildroot-devel@lists.sourceforge.net,
    ceph-devel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    intel-gfx@lists.freedesktop.org, kvm@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-cris-kernel@axis.com, linux-fbdev@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-ia64@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
    linux-mips@linux-mips.org, linux-rdma@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-samsung-soc@vger.kernel.org,
    linux-scsi@vger.kernel.org, linux-security-module@vger.kernel.org,
    linux-sh@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org,
    Lorenzo Stoakes
Subject: [PATCH 02/10] mm: remove write/force parameters from __get_user_pages_unlocked()
Date: Thu, 13 Oct 2016 01:20:12 +0100
Message-Id: <20161013002020.3062-3-lstoakes@gmail.com>
X-Mailer: git-send-email 2.10.0
In-Reply-To: <20161013002020.3062-1-lstoakes@gmail.com>
References: <20161013002020.3062-1-lstoakes@gmail.com>

This patch removes the write and force parameters from
__get_user_pages_unlocked() to make the use of FOLL_FORCE explicit in
callers, as use of this flag can result in surprising behaviour (and
hence bugs) within the mm subsystem.
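For illustration only (not part of the patch): after this change a caller
builds the gup_flags value itself, so any FOLL_FORCE use is explicit and
greppable at the call site. A minimal sketch, assuming a hypothetical
caller (the function name example_gup and its write/force arguments are
illustrative, not from this series):

	/*
	 * Sketch of the caller-side conversion: the old int write/force
	 * arguments are folded into gup_flags before the call.
	 */
	static long example_gup(unsigned long start, unsigned long nr_pages,
				int write, int force, struct page **pages)
	{
		unsigned int gup_flags = FOLL_TOUCH;

		if (write)
			gup_flags |= FOLL_WRITE;
		if (force)
			gup_flags |= FOLL_FORCE;	/* now visible at the call site */

		return __get_user_pages_unlocked(current, current->mm, start,
						 nr_pages, pages, gup_flags);
	}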
Signed-off-by: Lorenzo Stoakes
Acked-by: Paolo Bonzini
Reviewed-by: Jan Kara
---
 include/linux/mm.h     |  3 +--
 mm/gup.c               | 17 +++++++++--------
 mm/nommu.c             | 12 +++++++++---
 mm/process_vm_access.c |  7 +++++--
 virt/kvm/async_pf.c    |  3 ++-
 virt/kvm/kvm_main.c    | 11 ++++++++---
 6 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e9caec6..2db98b6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1285,8 +1285,7 @@ long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 		    int write, int force, struct page **pages, int *locked);
 long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
 			       unsigned long start, unsigned long nr_pages,
-			       int write, int force, struct page **pages,
-			       unsigned int gup_flags);
+			       struct page **pages, unsigned int gup_flags);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 		    int write, int force, struct page **pages);
 int get_user_pages_fast(unsigned long start, int nr_pages, int write,
diff --git a/mm/gup.c b/mm/gup.c
index ba83942..3d620dd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -865,17 +865,11 @@ EXPORT_SYMBOL(get_user_pages_locked);
  */
 __always_inline long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
 					       unsigned long start, unsigned long nr_pages,
-					       int write, int force, struct page **pages,
-					       unsigned int gup_flags)
+					       struct page **pages, unsigned int gup_flags)
 {
 	long ret;
 	int locked = 1;
 
-	if (write)
-		gup_flags |= FOLL_WRITE;
-	if (force)
-		gup_flags |= FOLL_FORCE;
-
 	down_read(&mm->mmap_sem);
 	ret = __get_user_pages_locked(tsk, mm, start, nr_pages, pages, NULL,
 				      &locked, false, gup_flags);
@@ -905,8 +899,15 @@ EXPORT_SYMBOL(__get_user_pages_unlocked);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 			     int write, int force, struct page **pages)
 {
+	unsigned int flags = FOLL_TOUCH;
+
+	if (write)
+		flags |= FOLL_WRITE;
+	if (force)
+		flags |= FOLL_FORCE;
+
 	return __get_user_pages_unlocked(current, current->mm, start, nr_pages,
-					 write, force, pages, FOLL_TOUCH);
+					 pages, flags);
 }
 EXPORT_SYMBOL(get_user_pages_unlocked);
 
diff --git a/mm/nommu.c b/mm/nommu.c
index 95daf81..925dcc1 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -185,8 +185,7 @@ EXPORT_SYMBOL(get_user_pages_locked);
 
 long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
 			       unsigned long start, unsigned long nr_pages,
-			       int write, int force, struct page **pages,
-			       unsigned int gup_flags)
+			       struct page **pages, unsigned int gup_flags)
 {
 	long ret;
 	down_read(&mm->mmap_sem);
@@ -200,8 +199,15 @@ EXPORT_SYMBOL(__get_user_pages_unlocked);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 			     int write, int force, struct page **pages)
 {
+	unsigned int flags = 0;
+
+	if (write)
+		flags |= FOLL_WRITE;
+	if (force)
+		flags |= FOLL_FORCE;
+
 	return __get_user_pages_unlocked(current, current->mm, start, nr_pages,
-					 write, force, pages, 0);
+					 pages, flags);
 }
 EXPORT_SYMBOL(get_user_pages_unlocked);
 
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 07514d4..be8dc8d 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -88,12 +88,16 @@ static int process_vm_rw_single_vec(unsigned long addr,
 	ssize_t rc = 0;
 	unsigned long max_pages_per_loop = PVM_MAX_KMALLOC_PAGES
 		/ sizeof(struct pages *);
+	unsigned int flags = FOLL_REMOTE;
 
 	/* Work out address and page range required */
 	if (len == 0)
 		return 0;
 	nr_pages = (addr + len - 1) / PAGE_SIZE - addr / PAGE_SIZE + 1;
 
+	if (vm_write)
+		flags |= FOLL_WRITE;
+
 	while (!rc && nr_pages && iov_iter_count(iter)) {
 		int pages = min(nr_pages, max_pages_per_loop);
 		size_t bytes;
@@ -104,8 +108,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		 * current/current->mm
 		 */
 		pages = __get_user_pages_unlocked(task, mm, pa, pages,
-						  vm_write, 0, process_pages,
-						  FOLL_REMOTE);
+						  process_pages, flags);
 		if (pages <= 0)
 			return -EFAULT;
 
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index db96688..8035cc1 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -84,7 +84,8 @@ static void async_pf_execute(struct work_struct *work)
 	 * mm and might be done in another context, so we must
 	 * use FOLL_REMOTE.
 	 */
-	__get_user_pages_unlocked(NULL, mm, addr, 1, 1, 0, NULL, FOLL_REMOTE);
+	__get_user_pages_unlocked(NULL, mm, addr, 1, NULL,
+			FOLL_WRITE | FOLL_REMOTE);
 
 	kvm_async_page_present_sync(vcpu, apf);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 81dfc73..28510e7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1416,10 +1416,15 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 		down_read(&current->mm->mmap_sem);
 		npages = get_user_page_nowait(addr, write_fault, page);
 		up_read(&current->mm->mmap_sem);
-	} else
+	} else {
+		unsigned int flags = FOLL_TOUCH | FOLL_HWPOISON;
+
+		if (write_fault)
+			flags |= FOLL_WRITE;
+
 		npages = __get_user_pages_unlocked(current, current->mm, addr, 1,
-						   write_fault, 0, page,
-						   FOLL_TOUCH|FOLL_HWPOISON);
+						   page, flags);
+	}
 	if (npages != 1)
 		return npages;