From patchwork Fri Sep 9 22:24:23 2016
X-Patchwork-Submitter: "Kani, Toshi"
X-Patchwork-Id: 9324533
From: Toshi Kani
To: akpm@linux-foundation.org
Cc: linux-nvdimm@lists.01.org, mawilcox@microsoft.com, hughd@google.com,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	kirill.shutemov@linux.intel.com
Subject: [PATCH 2/2] shmem: call __thp_get_unmapped_area to alloc a pmd-aligned addr
Date: Fri, 9 Sep 2016 16:24:23 -0600
Message-Id: <1473459863-11287-3-git-send-email-toshi.kani@hpe.com>
In-Reply-To: <1473459863-11287-1-git-send-email-toshi.kani@hpe.com>
References: <1473459863-11287-1-git-send-email-toshi.kani@hpe.com>

shmem_get_unmapped_area() provides functionality similar to
__thp_get_unmapped_area(), as both allocate a pmd-aligned address.
Change shmem_get_unmapped_area() to perform its shmem-specific checks
and then call __thp_get_unmapped_area() to allocate a pmd-aligned
address.

Link: https://lkml.org/lkml/2016/8/29/620

Suggested-by: Kirill A. Shutemov
Signed-off-by: Toshi Kani
Cc: Andrew Morton
Cc: Kirill A. Shutemov
Cc: Hugh Dickins
Cc: Matthew Wilcox
Cc: Dan Williams
Acked-by: Kirill A. Shutemov
---
 include/linux/huge_mm.h | 10 +++++++
 mm/shmem.c              | 68 +++++++++-------------------------------
 2 files changed, 23 insertions(+), 55 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 4fca526..1b65924 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -90,6 +90,9 @@ extern unsigned long transparent_hugepage_flags;
 extern unsigned long thp_get_unmapped_area(struct file *filp,
 		unsigned long addr, unsigned long len, unsigned long pgoff,
 		unsigned long flags);
+extern unsigned long __thp_get_unmapped_area(struct file *filp,
+		unsigned long len, loff_t off, unsigned long flags,
+		unsigned long size);
 
 extern void prep_transhuge_page(struct page *page);
 extern void free_transhuge_page(struct page *page);
@@ -176,6 +179,13 @@ static inline void prep_transhuge_page(struct page *page) {}
 
 #define thp_get_unmapped_area	NULL
 
+static inline unsigned long __thp_get_unmapped_area(struct file *filp,
+		unsigned long len, loff_t off, unsigned long flags,
+		unsigned long size)
+{
+	return 0;
+}
+
 static inline int
 split_huge_page_to_list(struct page *page, struct list_head *list)
 {
diff --git a/mm/shmem.c b/mm/shmem.c
index aec5b49..ef27455 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1925,45 +1925,23 @@ static int shmem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 }
 
 unsigned long shmem_get_unmapped_area(struct file *file,
-				      unsigned long uaddr, unsigned long len,
+				      unsigned long addr, unsigned long len,
 				      unsigned long pgoff, unsigned long flags)
 {
-	unsigned long (*get_area)(struct file *,
-		unsigned long, unsigned long, unsigned long, unsigned long);
-	unsigned long addr;
-	unsigned long offset;
-	unsigned long inflated_len;
-	unsigned long inflated_addr;
-	unsigned long inflated_offset;
-
-	if (len > TASK_SIZE)
-		return -ENOMEM;
-
-	get_area = current->mm->get_unmapped_area;
-	addr = get_area(file, uaddr, len, pgoff, flags);
+	loff_t off = (loff_t)pgoff << PAGE_SHIFT;
 
 	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
-		return addr;
-	if (IS_ERR_VALUE(addr))
-		return addr;
-	if (addr & ~PAGE_MASK)
-		return addr;
-	if (addr > TASK_SIZE - len)
-		return addr;
-
+		goto out;
 	if (shmem_huge == SHMEM_HUGE_DENY)
-		return addr;
-	if (len < HPAGE_PMD_SIZE)
-		return addr;
-	if (flags & MAP_FIXED)
-		return addr;
+		goto out;
+
 	/*
 	 * Our priority is to support MAP_SHARED mapped hugely;
 	 * and support MAP_PRIVATE mapped hugely too, until it is COWed.
 	 * But if caller specified an address hint, respect that as before.
 	 */
-	if (uaddr)
-		return addr;
+	if (addr)
+		goto out;
 
 	if (shmem_huge != SHMEM_HUGE_FORCE) {
 		struct super_block *sb;
@@ -1977,39 +1955,19 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 		 * for "/dev/zero", to create a shared anonymous object.
 		 */
 		if (IS_ERR(shm_mnt))
-			return addr;
+			goto out;
 		sb = shm_mnt->mnt_sb;
 	}
 	if (SHMEM_SB(sb)->huge == SHMEM_HUGE_NEVER)
-		return addr;
+		goto out;
 	}
 
-	offset = (pgoff << PAGE_SHIFT) & (HPAGE_PMD_SIZE-1);
-	if (offset && offset + len < 2 * HPAGE_PMD_SIZE)
-		return addr;
-	if ((addr & (HPAGE_PMD_SIZE-1)) == offset)
-		return addr;
-
-	inflated_len = len + HPAGE_PMD_SIZE - PAGE_SIZE;
-	if (inflated_len > TASK_SIZE)
-		return addr;
-	if (inflated_len < len)
-		return addr;
-
-	inflated_addr = get_area(NULL, 0, inflated_len, 0, flags);
-	if (IS_ERR_VALUE(inflated_addr))
-		return addr;
-	if (inflated_addr & ~PAGE_MASK)
+	addr = __thp_get_unmapped_area(file, len, off, flags, HPAGE_PMD_SIZE);
+	if (addr)
 		return addr;
 
-	inflated_offset = inflated_addr & (HPAGE_PMD_SIZE-1);
-	inflated_addr += offset - inflated_offset;
-	if (inflated_offset > offset)
-		inflated_addr += HPAGE_PMD_SIZE;
-
-	if (inflated_addr > TASK_SIZE - len)
-		return addr;
-	return inflated_addr;
+out:
+	return current->mm->get_unmapped_area(file, addr, len, pgoff, flags);
 }
 
 #ifdef CONFIG_NUMA