From patchwork Thu Apr 14 16:48:30 2016
X-Patchwork-Id: 8839661
From: Toshi Kani
To: akpm@linux-foundation.org, dan.j.williams@intel.com, viro@zeniv.linux.org.uk
Cc: tytso@mit.edu, linux-nvdimm@lists.01.org, jack@suse.cz,
	david@fromorbit.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	adilger.kernel@dilger.ca, linux-fsdevel@vger.kernel.org,
	kirill.shutemov@linux.intel.com
Subject: [PATCH v3 1/2] dax: add dax_get_unmapped_area for pmd mappings
Date: Thu, 14 Apr 2016 10:48:30 -0600
Message-Id: <1460652511-19636-2-git-send-email-toshi.kani@hpe.com>
X-Mailer: git-send-email 2.5.5
In-Reply-To: <1460652511-19636-1-git-send-email-toshi.kani@hpe.com>
References: <1460652511-19636-1-git-send-email-toshi.kani@hpe.com>

When CONFIG_FS_DAX_PMD is set, DAX supports mmap() using the pmd page
size.  This feature relies on both the mmap virtual address and the FS
block (i.e. physical address) being aligned to the pmd page size.  Users
can use mkfs options to have the FS align its block allocations.
However, aligning the mmap address requires code changes to existing
applications so that they provide a pmd-aligned address to mmap().  For
instance, fio with "ioengine=mmap" performs I/Os with mmap() [1].
It calls mmap() with a NULL address, which needs to be changed to
provide a pmd-aligned address for testing with DAX pmd mappings.
Changing all applications that call mmap() with NULL is undesirable.

Add dax_get_unmapped_area(), which can be called from a filesystem's
get_unmapped_area handler to align the mmap address to the pmd size for
a DAX file.  It calls the default handler, mm->get_unmapped_area(), to
find a range and then aligns it for a DAX file.

[1]: https://github.com/axboe/fio/blob/master/engines/mmap.c

Signed-off-by: Toshi Kani
Cc: Andrew Morton
Cc: Alexander Viro
Cc: Dan Williams
Cc: Matthew Wilcox
Cc: Ross Zwisler
Cc: Kirill A. Shutemov
Cc: Dave Chinner
Cc: Jan Kara
Cc: Theodore Ts'o
Cc: Andreas Dilger
---
 fs/dax.c            | 43 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/dax.h |  3 +++
 2 files changed, 46 insertions(+)

diff --git a/fs/dax.c b/fs/dax.c
index 75ba46d..f8ddd27 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1158,3 +1158,46 @@ int dax_truncate_page(struct inode *inode, loff_t from, get_block_t get_block)
 	return dax_zero_page_range(inode, from, length, get_block);
 }
 EXPORT_SYMBOL_GPL(dax_truncate_page);
+
+/**
+ * dax_get_unmapped_area - handle get_unmapped_area for a DAX file
+ * @filp: The file being mmap'd, if not NULL
+ * @addr: The mmap address. If NULL, the kernel assigns the address
+ * @len: The mmap size in bytes
+ * @pgoff: The page offset in the file where the mapping starts from.
+ * @flags: The mmap flags
+ *
+ * This function can be called by a filesystem for get_unmapped_area().
+ * When a target file is a DAX file, it aligns the mmap address at the
+ * beginning of the file by the pmd size.
+ */
+unsigned long dax_get_unmapped_area(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags)
+{
+	unsigned long off, off_end, off_pmd, len_pmd, addr_pmd;
+
+	if (!IS_ENABLED(CONFIG_FS_DAX_PMD) ||
+	    !filp || addr || !IS_DAX(filp->f_mapping->host))
+		goto out;
+
+	off = pgoff << PAGE_SHIFT;
+	off_end = off + len;
+	off_pmd = round_up(off, PMD_SIZE);	/* pmd-aligned offset */
+
+	if ((off_end <= off_pmd) || ((off_end - off_pmd) < PMD_SIZE))
+		goto out;
+
+	len_pmd = len + PMD_SIZE;
+	if ((off + len_pmd) < off)
+		goto out;
+
+	addr_pmd = current->mm->get_unmapped_area(filp, addr, len_pmd,
+			pgoff, flags);
+	if (!IS_ERR_VALUE(addr_pmd)) {
+		addr_pmd += (off - addr_pmd) & (PMD_SIZE - 1);
+		return addr_pmd;
+	}
+out:
+	return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+}
+EXPORT_SYMBOL_GPL(dax_get_unmapped_area);
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 636dd59..184b171 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -17,12 +17,15 @@ int __dax_fault(struct vm_area_struct *, struct vm_fault *, get_block_t,
 
 #ifdef CONFIG_FS_DAX
 struct page *read_dax_sector(struct block_device *bdev, sector_t n);
+unsigned long dax_get_unmapped_area(struct file *filp, unsigned long addr,
+		unsigned long len, unsigned long pgoff, unsigned long flags);
 #else
 static inline struct page *read_dax_sector(struct block_device *bdev,
 		sector_t n)
 {
 	return ERR_PTR(-ENXIO);
 }
+#define dax_get_unmapped_area NULL
 #endif
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
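
For illustration only (not part of this patch), a minimal sketch of how
a DAX-capable filesystem could wire the helper into its file_operations;
the struct name and the other methods below are hypothetical
placeholders.  Because the !CONFIG_FS_DAX stub above is
"#define dax_get_unmapped_area NULL" rather than an inline function, the
initializer simply degrades to an unset ->get_unmapped_area hook when
DAX is not configured, and the mm falls back to its default handler:

	#include <linux/fs.h>
	#include <linux/dax.h>

	/* Hypothetical file operations of a DAX-capable filesystem. */
	static const struct file_operations example_dax_file_operations = {
		.mmap			= generic_file_mmap,	/* placeholder */
		.get_unmapped_area	= dax_get_unmapped_area,
		/* ... remaining methods elided ... */
	};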
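
The alignment arithmetic can also be seen in isolation with a small
userspace program (made-up example values; PMD_SIZE is 2MB on x86-64):
the helper asks the default handler for len + PMD_SIZE bytes so there is
room to slide the start of the range forward until the virtual address
and the file offset have the same offset within a pmd page, which is
what a DAX pmd mapping requires:

	#include <stdio.h>

	#define PMD_SIZE	0x200000UL	/* 2MB pmd page size on x86-64 */

	int main(void)
	{
		/* File offset of the mapping (pmd-aligned in this example). */
		unsigned long off = 0x400000UL;
		/* Range returned by the default get_unmapped_area() (made up). */
		unsigned long addr_pmd = 0x7f0000123000UL;

		/* Same step as in dax_get_unmapped_area(): slide the start
		 * forward so (addr_pmd - off) is a multiple of PMD_SIZE. */
		addr_pmd += (off - addr_pmd) & (PMD_SIZE - 1);

		printf("aligned addr = 0x%lx\n", addr_pmd);	/* 0x7f0000200000 */
		return 0;
	}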