From patchwork Thu Sep 27 17:51:23 2018
X-Patchwork-Submitter: Souptick Joarder
X-Patchwork-Id: 10618371
Date: Thu, 27 Sep 2018 23:21:23 +0530
From: Souptick Joarder
To: akpm@linux-foundation.org, dan.j.williams@intel.com, mhocko@suse.com,
 kirill.shutemov@linux.intel.com, pasha.tatashin@oracle.com, riel@redhat.com,
 willy@infradead.org, minchan@kernel.org, peterz@infradead.org,
 ying.huang@intel.com, ak@linux.intel.com, rppt@linux.vnet.ibm.com,
 linux@dominikbrodowski.net, arnd@arndb.de, mcgrof@kernel.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [PATCH] mm: Introduce new function vm_insert_kmem_page
Message-ID: <20180927175123.GA16367@jordon-HP-15-Notebook-PC>
vm_insert_kmem_page is similar to vm_insert_page and will be used by
drivers to map kernel-allocated (kmalloc/vmalloc/pages) memory to a
user vma.

Previously, vm_insert_page was used both in page fault handler context
and outside it. When vm_insert_page is used in page fault handler
context, each driver has to map errno to a VM_FAULT_* code in its own
way. As part of the vm_fault_t migration, all page fault handlers have
been cleaned up to use the new vmf_insert_page. Going forward,
vm_insert_page will be removed by converting it to vmf_insert_page.

But there are places where vm_insert_page is used outside page fault
handler context, and converting those to vmf_insert_page is not a good
approach: drivers would end up with new VM_FAULT_* to errno conversion
code, which would make each user more complex.

So this new vm_insert_kmem_page can be used to map kernel memory to a
user vma outside page fault handler context.

In short, vmf_insert_page will be used in page fault handler context,
and vm_insert_kmem_page will be used to map kernel memory to a user vma
outside page fault handler context.

We will slowly convert all users of vm_insert_page to
vm_insert_kmem_page once this API is available in Linus's tree.

Signed-off-by: Souptick Joarder
---
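For illustration, a minimal sketch of the intended caller: a
hypothetical driver ->mmap handler mapping a single kmalloc'd,
page-aligned page to user space outside fault context. my_dev, my_mmap
and kbuf are made-up names for the sketch, not part of this patch:

	#include <linux/fs.h>
	#include <linux/mm.h>

	struct my_dev {
		void *kbuf;	/* one kmalloc'd, page-aligned page */
	};

	static int my_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct my_dev *dev = file->private_data;

		/*
		 * ->mmap runs with mmap_sem held for write, i.e. outside
		 * page fault context, so vm_insert_kmem_page() rather
		 * than vmf_insert_page() is the appropriate helper here.
		 */
		return vm_insert_kmem_page(vma, vma->vm_start,
					   virt_to_page(dev->kbuf));
	}

A buffer spanning several pages would call vm_insert_kmem_page once per
page, the same way existing callers loop over vm_insert_page today.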
 include/linux/mm.h |  2 ++
 mm/memory.c        | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/nommu.c         |  7 ++++++
 3 files changed, 78 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a61ebe8..5f42d35 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2477,6 +2477,8 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 struct vm_area_struct *find_extend_vma(struct mm_struct *, unsigned long addr);
 int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
 			unsigned long pfn, unsigned long size, pgprot_t);
+int vm_insert_kmem_page(struct vm_area_struct *vma, unsigned long addr,
+			struct page *page);
 int vm_insert_page(struct vm_area_struct *, unsigned long addr, struct page *);
 int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn);
diff --git a/mm/memory.c b/mm/memory.c
index c467102..b800c10 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1682,6 +1682,75 @@ pte_t *__get_locked_pte(struct mm_struct *mm, unsigned long addr,
 	return pte_alloc_map_lock(mm, pmd, addr, ptl);
 }
 
+static int insert_kmem_page(struct vm_area_struct *vma, unsigned long addr,
+			struct page *page, pgprot_t prot)
+{
+	struct mm_struct *mm = vma->vm_mm;
+	int retval;
+	pte_t *pte;
+	spinlock_t *ptl;
+
+	retval = -EINVAL;
+	if (PageAnon(page))
+		goto out;
+	retval = -ENOMEM;
+	flush_dcache_page(page);
+	pte = get_locked_pte(mm, addr, &ptl);
+	if (!pte)
+		goto out;
+	retval = -EBUSY;
+	if (!pte_none(*pte))
+		goto out_unlock;
+
+	get_page(page);
+	inc_mm_counter_fast(mm, mm_counter_file(page));
+	page_add_file_rmap(page, false);
+	set_pte_at(mm, addr, pte, mk_pte(page, prot));
+
+	retval = 0;
+	pte_unmap_unlock(pte, ptl);
+	return retval;
+out_unlock:
+	pte_unmap_unlock(pte, ptl);
+out:
+	return retval;
+}
+
+/**
+ * vm_insert_kmem_page - insert single page into user vma
+ * @vma: user vma to map to
+ * @addr: target user address of this page
+ * @page: source kernel page
+ *
+ * This allows drivers to insert individual kernel pages into a user vma.
+ * This API should be used outside page fault handler context.
+ *
+ * Previously the same was done by drivers with vm_insert_page. But
+ * vm_insert_page will be converted to vmf_insert_page, which will be
+ * used in fault handler context and whose return type will be
+ * vm_fault_t.
+ *
+ * There are still places where drivers need to map kernel memory into a
+ * user vma outside fault handler context. As vmf_insert_page will be
+ * restricted to use within page fault handlers, vm_insert_kmem_page can
+ * be used to map kernel memory to a user vma outside fault handler
+ * context.
+ */
+int vm_insert_kmem_page(struct vm_area_struct *vma, unsigned long addr,
+			struct page *page)
+{
+	if (addr < vma->vm_start || addr >= vma->vm_end)
+		return -EFAULT;
+	if (!page_count(page))
+		return -EINVAL;
+	if (!(vma->vm_flags & VM_MIXEDMAP)) {
+		BUG_ON(down_read_trylock(&vma->vm_mm->mmap_sem));
+		BUG_ON(vma->vm_flags & VM_PFNMAP);
+		vma->vm_flags |= VM_MIXEDMAP;
+	}
+	return insert_kmem_page(vma, addr, page, vma->vm_page_prot);
+}
+EXPORT_SYMBOL(vm_insert_kmem_page);
+
 /*
  * This is the old fallback for page remapping.
  *
diff --git a/mm/nommu.c b/mm/nommu.c
index e4aac33..153b8c8 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -473,6 +473,13 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
 }
 EXPORT_SYMBOL(vm_insert_page);
 
+int vm_insert_kmem_page(struct vm_area_struct *vma, unsigned long addr,
+			struct page *page)
+{
+	return -EINVAL;
+}
+EXPORT_SYMBOL(vm_insert_kmem_page);
+
 /*
  * sys_brk() for the most part doesn't need the global kernel
  * lock, except when an application is doing something nasty
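
To make the split concrete, this is roughly the errno to VM_FAULT_*
mapping that each fault handler had to open-code around vm_insert_page
before vmf_insert_page existed. my_fault and my_page are made-up names
for the sketch, not part of this patch:

	static struct page *my_page;	/* assumed: driver's backing page */

	static vm_fault_t my_fault(struct vm_fault *vmf)
	{
		int err = vm_insert_page(vmf->vma, vmf->address, my_page);

		switch (err) {
		case 0:
		case -EBUSY:	/* the pte was already populated */
			return VM_FAULT_NOPAGE;
		case -ENOMEM:
			return VM_FAULT_OOM;
		default:
			return VM_FAULT_SIGBUS;
		}
	}

vmf_insert_page folds this mapping into the core. Converting the
non-fault-path users to it would push the reverse (VM_FAULT_* to errno)
conversion back into every such driver, which is exactly what
vm_insert_kmem_page avoids.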