Subject: [PATCH v4 03/14] dax: use HPAGE_SIZE instead of PMD_SIZE
From: Dan Williams
To: axboe@fb.com
Cc: jack@suse.cz, linux-nvdimm@lists.01.org, Dave Hansen, david@fromorbit.com, linux-block@vger.kernel.org, ross.zwisler@linux.intel.com, hch@lst.de
Date: Sun, 08 Nov 2015 14:27:39 -0500
Message-ID: <20151108192739.9104.32105.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <20151108192722.9104.86664.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <20151108192722.9104.86664.stgit@dwillia2-desk3.amr.corp.intel.com>

As Dave points out, when dealing with the contents of a page we use
PAGE_SIZE and PAGE_SHIFT; similarly, for huge pages we should use
HPAGE_SIZE and HPAGE_SHIFT.

Reported-by: Dave Hansen
Signed-off-by: Dan Williams
---
 fs/dax.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index f8e543839e5c..149d6000d72a 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -511,7 +511,7 @@ EXPORT_SYMBOL_GPL(dax_fault);
  * The 'colour' (ie low bits) within a PMD of a page offset.  This comes up
  * more often than one might expect in the below function.
  */
-#define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)
+#define PG_PMD_COLOUR	((HPAGE_SIZE >> PAGE_SHIFT) - 1)
 
 int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		pmd_t *pmd, unsigned int flags, get_block_t get_block,
@@ -537,7 +537,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	/* If the PMD would extend outside the VMA */
 	if (pmd_addr < vma->vm_start)
 		return VM_FAULT_FALLBACK;
-	if ((pmd_addr + PMD_SIZE) > vma->vm_end)
+	if ((pmd_addr + HPAGE_SIZE) > vma->vm_end)
 		return VM_FAULT_FALLBACK;
 
 	pgoff = linear_page_index(vma, pmd_addr);
@@ -551,7 +551,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	memset(&bh, 0, sizeof(bh));
 	block = (sector_t)pgoff << (PAGE_SHIFT - blkbits);
 
-	bh.b_size = PMD_SIZE;
+	bh.b_size = HPAGE_SIZE;
 	length = get_block(inode, block, &bh, write);
 	if (length)
 		return VM_FAULT_SIGBUS;
@@ -562,7 +562,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	 * just fall back to PTEs.  Calling get_block 512 times in a loop
 	 * would be silly.
 	 */
-	if (!buffer_size_valid(&bh) || bh.b_size < PMD_SIZE)
+	if (!buffer_size_valid(&bh) || bh.b_size < HPAGE_SIZE)
 		goto fallback;
 
 	/*
@@ -571,7 +571,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 	 */
 	if (buffer_new(&bh)) {
 		i_mmap_unlock_read(mapping);
-		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, PMD_SIZE, 0);
+		unmap_mapping_range(mapping, pgoff << PAGE_SHIFT, HPAGE_SIZE, 0);
 		i_mmap_lock_read(mapping);
 	}
 
@@ -616,7 +616,7 @@ int __dax_pmd_fault(struct vm_area_struct *vma, unsigned long address,
 		result = VM_FAULT_SIGBUS;
 		goto out;
 	}
-	if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR))
+	if ((length < HPAGE_SIZE) || (pfn & PG_PMD_COLOUR))
 		goto fallback;
 
 	if (buffer_unwritten(&bh) || buffer_new(&bh)) {
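
For reference: on x86 the kernel defines HPAGE_SHIFT as PMD_SHIFT and
HPAGE_SIZE as (1UL << HPAGE_SHIFT), so the two constants are numerically
identical there and this change is purely a naming cleanup. Below is a
minimal user-space sketch of the PG_PMD_COLOUR mask and the pfn alignment
check from the final hunk; the PAGE_SHIFT/HPAGE_SHIFT values and the sample
pfns are hard-coded x86-64 assumptions, and this code is illustrative only,
not part of the patch:

/*
 * Standalone sketch of the PG_PMD_COLOUR logic from fs/dax.c.
 * Assumes x86-64 values: 4 KiB base pages (PAGE_SHIFT == 12) and
 * 2 MiB huge pages (HPAGE_SHIFT == PMD_SHIFT == 21).
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define HPAGE_SHIFT	21			/* == PMD_SHIFT on x86-64 */
#define HPAGE_SIZE	(1UL << HPAGE_SHIFT)	/* 2 MiB */

/* Low bits of a page offset within one PMD: 512 - 1 == 0x1ff here. */
#define PG_PMD_COLOUR	((HPAGE_SIZE >> PAGE_SHIFT) - 1)

int main(void)
{
	/* Hypothetical pfns: one PMD-aligned (multiple of 512), one not. */
	unsigned long aligned_pfn = 0x200;
	unsigned long unaligned_pfn = 0x2a7;

	printf("pages per PMD: %lu, colour mask: 0x%lx\n",
	       HPAGE_SIZE / PAGE_SIZE, PG_PMD_COLOUR);

	/*
	 * Mirrors the check in __dax_pmd_fault(): a pfn can only back a
	 * PMD mapping if its low 'colour' bits are all zero.
	 */
	printf("pfn 0x%lx usable for PMD mapping: %s\n", aligned_pfn,
	       (aligned_pfn & PG_PMD_COLOUR) ? "no" : "yes");
	printf("pfn 0x%lx usable for PMD mapping: %s\n", unaligned_pfn,
	       (unaligned_pfn & PG_PMD_COLOUR) ? "no" : "yes");
	return 0;
}

Because HPAGE_SIZE >> PAGE_SHIFT is 512 under these assumptions, the mask
is 0x1ff: 0x200 passes the check and 0x2a7 falls back to PTEs, exactly as
(length < HPAGE_SIZE) || (pfn & PG_PMD_COLOUR) would decide in the hunk
at line 616.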