From patchwork Mon Jul 27 16:29:29 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11687175
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
    Borislav Petkov, Catalin Marinas, Christopher Lameter, Dan Williams,
    Dave Hansen, Elena Reshetova, "H. Peter Anvin", Idan Yaniv,
    Ingo Molnar, James Bottomley, "Kirill A. Shutemov", Matthew Wilcox,
    Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
    Peter Zijlstra, Thomas Gleixner, Tycho Andersen, Will Deacon,
    linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-nvdimm@lists.01.org,
    linux-riscv@lists.infradead.org, x86@kernel.org
Subject: [PATCH v2 1/7] mm: add definition of PMD_PAGE_ORDER
Date: Mon, 27 Jul 2020 19:29:29 +0300
Message-Id: <20200727162935.31714-2-rppt@kernel.org>
In-Reply-To: <20200727162935.31714-1-rppt@kernel.org>
References: <20200727162935.31714-1-rppt@kernel.org>

From: Mike Rapoport

The definition of PMD_PAGE_ORDER, denoting the number of base pages in
a second-level leaf page, is already used by DAX and may be handy in
other cases as well.

Several architectures already define PMD_ORDER as the size of a
second-level page table, so to avoid conflicts with those definitions,
use the name PMD_PAGE_ORDER and update DAX accordingly.
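
To make the arithmetic concrete (an illustration, not part of the
change): with 4K base pages and 2M PMD-level pages, as on x86_64 with
the default configuration, PMD_SHIFT is 21 and PAGE_SHIFT is 12, so
PMD_PAGE_ORDER evaluates to 9 and a PMD-level leaf page maps
1 << 9 = 512 base pages. A standalone userspace sketch, with the shift
values hard-coded purely for demonstration:

	#include <stdio.h>

	/*
	 * Hard-coded shifts for illustration only; a real kernel takes
	 * these from the architecture's page table layout (values below
	 * match x86_64 with 4K base pages).
	 */
	#define PAGE_SHIFT	12
	#define PMD_SHIFT	21
	#define PMD_PAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)

	int main(void)
	{
		/* Prints: PMD_PAGE_ORDER = 9, 512 base pages per PMD leaf */
		printf("PMD_PAGE_ORDER = %d, %lu base pages per PMD leaf\n",
		       PMD_PAGE_ORDER, 1UL << PMD_PAGE_ORDER);
		return 0;
	}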
Signed-off-by: Mike Rapoport
---
 fs/dax.c                | 10 +++++-----
 include/linux/pgtable.h |  3 +++
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 11b16729b86f..b91d8c8dda45 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -50,7 +50,7 @@ static inline unsigned int pe_order(enum page_entry_size pe_size)
 #define PG_PMD_NR (PMD_SIZE >> PAGE_SHIFT)
 
 /* The order of a PMD entry */
-#define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT)
+#define PMD_PAGE_ORDER (PMD_SHIFT - PAGE_SHIFT)
 
 static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
 
@@ -98,7 +98,7 @@ static bool dax_is_locked(void *entry)
 static unsigned int dax_entry_order(void *entry)
 {
 	if (xa_to_value(entry) & DAX_PMD)
-		return PMD_ORDER;
+		return PMD_PAGE_ORDER;
 	return 0;
 }
 
@@ -1456,7 +1456,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct address_space *mapping = vma->vm_file->f_mapping;
-	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_ORDER);
+	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_PAGE_ORDER);
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool sync;
@@ -1515,7 +1515,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * entry is already in the array, for instance), it will return
 	 * VM_FAULT_FALLBACK.
 	 */
-	entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+	entry = grab_mapping_entry(&xas, mapping, PMD_PAGE_ORDER);
 	if (xa_is_internal(entry)) {
 		result = xa_to_internal(entry);
 		goto fallback;
@@ -1681,7 +1681,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
 	if (order == 0)
 		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
 #ifdef CONFIG_FS_DAX_PMD
-	else if (order == PMD_ORDER)
+	else if (order == PMD_PAGE_ORDER)
 		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
 #endif
 	else
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 56c1e8eb7bb0..79f8443609e7 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -28,6 +28,9 @@
 #define USER_PGTABLES_CEILING 0UL
 #endif
 
+/* Number of base pages in a second level leaf page */
+#define PMD_PAGE_ORDER (PMD_SHIFT - PAGE_SHIFT)
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
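
As a usage note (a hypothetical example, not taken from this series):
with the definition now in include/linux/pgtable.h, a caller that wants
a PMD-sized block of base pages could express the order in terms of the
new macro rather than open-coding PMD_SHIFT - PAGE_SHIFT:

	#include <linux/gfp.h>
	#include <linux/pgtable.h>

	/*
	 * Hypothetical caller, for illustration only: allocate
	 * 2^PMD_PAGE_ORDER contiguous base pages, i.e. one PMD-sized
	 * block, as a compound page.
	 */
	static struct page *alloc_pmd_sized_block(gfp_t gfp)
	{
		return alloc_pages(gfp | __GFP_COMP, PMD_PAGE_ORDER);
	}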