From patchwork Mon Jul 27 16:29:29 2020
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11687177
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
    Borislav Petkov, Catalin Marinas, Christopher Lameter, Dan Williams,
    Dave Hansen, Elena Reshetova, "H. Peter Anvin", Idan Yaniv,
    Ingo Molnar, James Bottomley, "Kirill A. Shutemov", Matthew Wilcox,
    Mike Rapoport, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
    Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
    Will Deacon, linux-api@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-fsdevel@vger.kernel.org,
    linux-mm@kvack.org, linux-nvdimm@lists.01.org,
    linux-riscv@lists.infradead.org, x86@kernel.org
Subject: [PATCH v2 1/7] mm: add definition of PMD_PAGE_ORDER
Date: Mon, 27 Jul 2020 19:29:29 +0300
Message-Id: <20200727162935.31714-2-rppt@kernel.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200727162935.31714-1-rppt@kernel.org>
References: <20200727162935.31714-1-rppt@kernel.org>

From: Mike Rapoport

The definition of PMD_PAGE_ORDER, denoting the number of base pages in a
second-level leaf page, is already used by DAX and may be handy in other
cases as well.

Several architectures already define PMD_ORDER as the size of a second-level
page table, so to avoid a conflict with those definitions use the name
PMD_PAGE_ORDER and update DAX accordingly.

Signed-off-by: Mike Rapoport
---
 fs/dax.c                | 10 +++++-----
 include/linux/pgtable.h |  3 +++
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 11b16729b86f..b91d8c8dda45 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -50,7 +50,7 @@ static inline unsigned int pe_order(enum page_entry_size pe_size)
 #define PG_PMD_NR (PMD_SIZE >> PAGE_SHIFT)
 
 /* The order of a PMD entry */
-#define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT)
+#define PMD_PAGE_ORDER (PMD_SHIFT - PAGE_SHIFT)
 
 static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
 
@@ -98,7 +98,7 @@ static bool dax_is_locked(void *entry)
 static unsigned int dax_entry_order(void *entry)
 {
         if (xa_to_value(entry) & DAX_PMD)
-                return PMD_ORDER;
+                return PMD_PAGE_ORDER;
         return 0;
 }
 
@@ -1456,7 +1456,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 {
         struct vm_area_struct *vma = vmf->vma;
         struct address_space *mapping = vma->vm_file->f_mapping;
-        XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_ORDER);
+        XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_PAGE_ORDER);
         unsigned long pmd_addr = vmf->address & PMD_MASK;
         bool write = vmf->flags & FAULT_FLAG_WRITE;
         bool sync;
@@ -1515,7 +1515,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
          * entry is already in the array, for instance), it will return
          * VM_FAULT_FALLBACK.
          */
-        entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+        entry = grab_mapping_entry(&xas, mapping, PMD_PAGE_ORDER);
         if (xa_is_internal(entry)) {
                 result = xa_to_internal(entry);
                 goto fallback;
 
@@ -1681,7 +1681,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
         if (order == 0)
                 ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
 #ifdef CONFIG_FS_DAX_PMD
-        else if (order == PMD_ORDER)
+        else if (order == PMD_PAGE_ORDER)
                 ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
 #endif
         else
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 56c1e8eb7bb0..79f8443609e7 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -28,6 +28,9 @@
 #define USER_PGTABLES_CEILING 0UL
 #endif
 
+/* Number of base pages in a second level leaf page */
+#define PMD_PAGE_ORDER (PMD_SHIFT - PAGE_SHIFT)
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
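
For reference, a minimal stand-alone sketch (not part of the patch) of what
the new macro evaluates to. The shift values below are assumptions for a
common x86_64 configuration with 4 KiB base pages and 2 MiB PMD leaf pages;
other architectures and page sizes will give different numbers.

/*
 * Illustrative user-space sketch, not kernel code: mirrors the formula of
 * the PMD_PAGE_ORDER macro added in include/linux/pgtable.h using assumed
 * shift values (PAGE_SHIFT = 12, PMD_SHIFT = 21 on x86_64 with 4 KiB pages).
 */
#include <stdio.h>

#define EXAMPLE_PAGE_SHIFT 12   /* assumed: 4 KiB base pages */
#define EXAMPLE_PMD_SHIFT  21   /* assumed: 2 MiB PMD leaf pages */

/* Same expression as PMD_PAGE_ORDER, with the assumed example shifts */
#define EXAMPLE_PMD_PAGE_ORDER (EXAMPLE_PMD_SHIFT - EXAMPLE_PAGE_SHIFT)

int main(void)
{
        /* 21 - 12 = 9, so one PMD leaf maps 2^9 = 512 base pages */
        printf("PMD_PAGE_ORDER = %d\n", EXAMPLE_PMD_PAGE_ORDER);
        printf("base pages per PMD leaf page = %lu\n",
               1UL << EXAMPLE_PMD_PAGE_ORDER);
        return 0;
}

With these assumed shifts the sketch prints 9 and 512, i.e. one PMD leaf page
covers 512 base pages, which is also the order dax_entry_order() would report
for a DAX_PMD entry on such a configuration.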