From patchwork Thu Oct 13 00:20:12 2016
X-Patchwork-Submitter: Lorenzo Stoakes
X-Patchwork-Id: 9374061
From: Lorenzo Stoakes
To: linux-mm@kvack.org
Cc: Linus Torvalds, Jan Kara, Hugh Dickins, Dave Hansen, Rik van Riel,
    Mel Gorman, Andrew Morton, adi-buildroot-devel@lists.sourceforge.net,
    ceph-devel@vger.kernel.org, dri-devel@lists.freedesktop.org,
    intel-gfx@lists.freedesktop.org, kvm@vger.kernel.org,
    linux-alpha@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
    linux-cris-kernel@axis.com, linux-fbdev@vger.kernel.org,
    linux-fsdevel@vger.kernel.org, linux-ia64@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-media@vger.kernel.org,
    linux-mips@linux-mips.org, linux-rdma@vger.kernel.org,
    linux-s390@vger.kernel.org, linux-samsung-soc@vger.kernel.org,
    linux-scsi@vger.kernel.org, linux-security-module@vger.kernel.org,
    linux-sh@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
    netdev@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org,
    Lorenzo Stoakes
Subject: [PATCH 02/10] mm: remove write/force parameters from __get_user_pages_unlocked()
Date: Thu, 13 Oct 2016 01:20:12 +0100
Message-Id: <20161013002020.3062-3-lstoakes@gmail.com>
X-Mailer: git-send-email 2.10.0
In-Reply-To: <20161013002020.3062-1-lstoakes@gmail.com>
References: <20161013002020.3062-1-lstoakes@gmail.com>

This patch removes the write and force parameters from
__get_user_pages_unlocked() to make the use of FOLL_FORCE explicit in
callers, as use of this flag can result in surprising behaviour (and
hence bugs) within the mm subsystem.
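
To illustrate the intended caller-side conversion, here is a minimal
sketch (not part of the patch; the wrapper name is hypothetical) of a
caller that previously passed write/force ints and now builds the
gup_flags mask itself before calling the new __get_user_pages_unlocked()
signature introduced below:

	/*
	 * Illustrative sketch only (assumes <linux/mm.h> and
	 * <linux/sched.h> from the tree this patch applies to).
	 */
	static long example_pin_pages(struct task_struct *tsk, struct mm_struct *mm,
				      unsigned long start, unsigned long nr_pages,
				      int write, int force, struct page **pages)
	{
		unsigned int gup_flags = FOLL_TOUCH;

		if (write)
			gup_flags |= FOLL_WRITE;	/* caller needs write access */
		if (force)
			gup_flags |= FOLL_FORCE;	/* protection override is now visible */

		return __get_user_pages_unlocked(tsk, mm, start, nr_pages,
						 pages, gup_flags);
	}

This is exactly the pattern the get_user_pages_unlocked() wrappers in
mm/gup.c and mm/nommu.c adopt below.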
Signed-off-by: Lorenzo Stoakes
Acked-by: Paolo Bonzini
Reviewed-by: Jan Kara
---
 include/linux/mm.h     |  3 +--
 mm/gup.c               | 17 +++++++++--------
 mm/nommu.c             | 12 +++++++++---
 mm/process_vm_access.c |  7 +++++--
 virt/kvm/async_pf.c    |  3 ++-
 virt/kvm/kvm_main.c    | 11 ++++++++---
 6 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e9caec6..2db98b6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1285,8 +1285,7 @@ long get_user_pages_locked(unsigned long start, unsigned long nr_pages,
 		    int write, int force, struct page **pages, int *locked);
 long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
 			       unsigned long start, unsigned long nr_pages,
-			       int write, int force, struct page **pages,
-			       unsigned int gup_flags);
+			       struct page **pages, unsigned int gup_flags);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 		    int write, int force, struct page **pages);
 int get_user_pages_fast(unsigned long start, int nr_pages, int write,
diff --git a/mm/gup.c b/mm/gup.c
index ba83942..3d620dd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -865,17 +865,11 @@ EXPORT_SYMBOL(get_user_pages_locked);
  */
 __always_inline long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
 					       unsigned long start, unsigned long nr_pages,
-					       int write, int force, struct page **pages,
-					       unsigned int gup_flags)
+					       struct page **pages, unsigned int gup_flags)
 {
 	long ret;
 	int locked = 1;
 
-	if (write)
-		gup_flags |= FOLL_WRITE;
-	if (force)
-		gup_flags |= FOLL_FORCE;
-
 	down_read(&mm->mmap_sem);
 	ret = __get_user_pages_locked(tsk, mm, start, nr_pages, pages, NULL,
 				      &locked, false, gup_flags);
@@ -905,8 +899,15 @@ EXPORT_SYMBOL(__get_user_pages_unlocked);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 			     int write, int force, struct page **pages)
 {
+	unsigned int flags = FOLL_TOUCH;
+
+	if (write)
+		flags |= FOLL_WRITE;
+	if (force)
+		flags |= FOLL_FORCE;
+
 	return __get_user_pages_unlocked(current, current->mm, start, nr_pages,
-					 write, force, pages, FOLL_TOUCH);
+					 pages, flags);
 }
 EXPORT_SYMBOL(get_user_pages_unlocked);
 
diff --git a/mm/nommu.c b/mm/nommu.c
index 95daf81..925dcc1 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -185,8 +185,7 @@ EXPORT_SYMBOL(get_user_pages_locked);
 
 long __get_user_pages_unlocked(struct task_struct *tsk, struct mm_struct *mm,
 			       unsigned long start, unsigned long nr_pages,
-			       int write, int force, struct page **pages,
-			       unsigned int gup_flags)
+			       struct page **pages, unsigned int gup_flags)
 {
 	long ret;
 	down_read(&mm->mmap_sem);
@@ -200,8 +199,15 @@ EXPORT_SYMBOL(__get_user_pages_unlocked);
 long get_user_pages_unlocked(unsigned long start, unsigned long nr_pages,
 			     int write, int force, struct page **pages)
 {
+	unsigned int flags = 0;
+
+	if (write)
+		flags |= FOLL_WRITE;
+	if (force)
+		flags |= FOLL_FORCE;
+
 	return __get_user_pages_unlocked(current, current->mm, start, nr_pages,
-					 write, force, pages, 0);
+					 pages, flags);
 }
 EXPORT_SYMBOL(get_user_pages_unlocked);
 
diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c
index 07514d4..be8dc8d 100644
--- a/mm/process_vm_access.c
+++ b/mm/process_vm_access.c
@@ -88,12 +88,16 @@ static int process_vm_rw_single_vec(unsigned long addr,
 	ssize_t rc = 0;
 	unsigned long max_pages_per_loop = PVM_MAX_KMALLOC_PAGES
 		/ sizeof(struct pages *);
+	unsigned int flags = FOLL_REMOTE;
 
 	/* Work out address and page range required */
 	if (len == 0)
 		return 0;
 	nr_pages = (addr + len - 1) / PAGE_SIZE - addr / PAGE_SIZE + 1;
 
+	if (vm_write)
+		flags |= FOLL_WRITE;
+
 	while (!rc && nr_pages && iov_iter_count(iter)) {
 		int pages = min(nr_pages, max_pages_per_loop);
 		size_t bytes;
@@ -104,8 +108,7 @@ static int process_vm_rw_single_vec(unsigned long addr,
 		 * current/current->mm */
 		pages = __get_user_pages_unlocked(task, mm, pa, pages,
-						  vm_write, 0, process_pages,
-						  FOLL_REMOTE);
+						  process_pages, flags);
 		if (pages <= 0)
 			return -EFAULT;
 
diff --git a/virt/kvm/async_pf.c b/virt/kvm/async_pf.c
index db96688..8035cc1 100644
--- a/virt/kvm/async_pf.c
+++ b/virt/kvm/async_pf.c
@@ -84,7 +84,8 @@ static void async_pf_execute(struct work_struct *work)
 	 * mm and might be done in another context, so we must
 	 * use FOLL_REMOTE.
 	 */
-	__get_user_pages_unlocked(NULL, mm, addr, 1, 1, 0, NULL, FOLL_REMOTE);
+	__get_user_pages_unlocked(NULL, mm, addr, 1, NULL,
+				  FOLL_WRITE | FOLL_REMOTE);
 
 	kvm_async_page_present_sync(vcpu, apf);
 
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 81dfc73..28510e7 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1416,10 +1416,15 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
 		down_read(&current->mm->mmap_sem);
 		npages = get_user_page_nowait(addr, write_fault, page);
 		up_read(&current->mm->mmap_sem);
-	} else
+	} else {
+		unsigned int flags = FOLL_TOUCH | FOLL_HWPOISON;
+
+		if (write_fault)
+			flags |= FOLL_WRITE;
+
 		npages = __get_user_pages_unlocked(current, current->mm, addr, 1,
-						   write_fault, 0, page,
-						   FOLL_TOUCH|FOLL_HWPOISON);
+						   page, flags);
+	}
 	if (npages != 1)
 		return npages;
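
The remote-mm callers follow the same pattern. As a purely illustrative
sketch (the helper name is made up; the call itself mirrors the async_pf
conversion above, which passes NULL for tsk), pinning a single page in
another process's mm now looks like:

	/* Illustrative sketch only: pin one page in a foreign mm. */
	static long example_pin_remote_page(struct mm_struct *mm, unsigned long addr,
					    struct page **page, bool write)
	{
		unsigned int gup_flags = FOLL_REMOTE;	/* access targets another mm */

		if (write)
			gup_flags |= FOLL_WRITE;

		return __get_user_pages_unlocked(NULL, mm, addr, 1, page, gup_flags);
	}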