From patchwork Tue May 12 18:44:16 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Mike Rapoport
X-Patchwork-Id: 11543775
From: Mike Rapoport
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton, Arnd Bergmann, Borislav Petkov, Brian Cain,
 Catalin Marinas, Chris Zankel, "David S. Miller", Geert Uytterhoeven,
 Greentime Hu, Greg Ungerer, Guan Xuetao, Guo Ren, Heiko Carstens,
 Helge Deller, Ingo Molnar, Ley Foon Tan, Mark Salter, Matt Turner,
 Max Filippov, Michael Ellerman, Michal Simek, Mike Rapoport, Nick Hu,
 Paul Walmsley, Richard Weinberger, Rich Felker, Russell King,
 Stafford Horne, Thomas Bogendoerfer, Thomas Gleixner, Tony Luck,
 Vincent Chen, Vineet Gupta, Will Deacon, Yoshinori Sato,
 linux-alpha@vger.kernel.org, linux-arch@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-c6x-dev@linux-c6x.org,
 linux-csky@vger.kernel.org, linux-hexagon@vger.kernel.org,
 linux-ia64@vger.kernel.org, linux-m68k@lists.linux-m68k.org,
 linux-mips@vger.kernel.org, linux-mm@kvack.org, linux-parisc@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
 linux-s390@vger.kernel.org, linux-sh@vger.kernel.org,
 linux-snps-arc@lists.infradead.org, linux-um@lists.infradead.org,
 linux-xtensa@linux-xtensa.org, openrisc@lists.librecores.org,
 sparclinux@vger.kernel.org, x86@kernel.org, Mike Rapoport
Subject: [PATCH 06/12] m68k/mm: move {cache,nocache}_page() definitions close
 to their user
Date: Tue, 12 May 2020 21:44:16 +0300
Message-Id: <20200512184422.12418-7-rppt@kernel.org>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200512184422.12418-1-rppt@kernel.org>
References: <20200512184422.12418-1-rppt@kernel.org>

From: Mike Rapoport

The cache_page() and nocache_page() functions are only used by the
Motorola MMU variant to set the caching attributes of page table pages.

Move the definitions of these functions from
arch/m68k/include/asm/motorola_pgtable.h closer to their usage in
arch/m68k/mm/motorola.c and drop the unused definitions in
arch/m68k/include/asm/mcf_pgtable.h.

Signed-off-by: Mike Rapoport
Acked-by: Greg Ungerer
---
 arch/m68k/include/asm/mcf_pgtable.h      | 40 ---------------------
 arch/m68k/include/asm/motorola_pgtable.h | 44 ------------------------
 arch/m68k/mm/motorola.c                  | 43 +++++++++++++++++++++++
 3 files changed, 43 insertions(+), 84 deletions(-)

diff --git a/arch/m68k/include/asm/mcf_pgtable.h b/arch/m68k/include/asm/mcf_pgtable.h
index 0031cd387b75..737e826294f3 100644
--- a/arch/m68k/include/asm/mcf_pgtable.h
+++ b/arch/m68k/include/asm/mcf_pgtable.h
@@ -328,46 +328,6 @@ extern pgd_t kernel_pg_dir[PTRS_PER_PGD];
 #define pte_offset_kernel(dir, address) \
 	((pte_t *) __pmd_page(*(dir)) + __pte_offset(address))
 
-/*
- * Disable caching for page at given kernel virtual address.
- */
-static inline void nocache_page(void *vaddr)
-{
-	pgd_t *dir;
-	p4d_t *p4dp;
-	pud_t *pudp;
-	pmd_t *pmdp;
-	pte_t *ptep;
-	unsigned long addr = (unsigned long) vaddr;
-
-	dir = pgd_offset_k(addr);
-	p4dp = p4d_offset(dir, addr);
-	pudp = pud_offset(p4dp, addr);
-	pmdp = pmd_offset(pudp, addr);
-	ptep = pte_offset_kernel(pmdp, addr);
-	*ptep = pte_mknocache(*ptep);
-}
-
-/*
- * Enable caching for page at given kernel virtual address.
- */
-static inline void cache_page(void *vaddr)
-{
-	pgd_t *dir;
-	p4d_t *p4dp;
-	pud_t *pudp;
-	pmd_t *pmdp;
-	pte_t *ptep;
-	unsigned long addr = (unsigned long) vaddr;
-
-	dir = pgd_offset_k(addr);
-	p4dp = p4d_offset(dir, addr);
-	pudp = pud_offset(p4dp, addr);
-	pmdp = pmd_offset(pudp, addr);
-	ptep = pte_offset_kernel(pmdp, addr);
-	*ptep = pte_mkcache(*ptep);
-}
-
 /*
  * Encode and de-code a swap entry (must be !pte_none(e) && !pte_present(e))
  */
diff --git a/arch/m68k/include/asm/motorola_pgtable.h b/arch/m68k/include/asm/motorola_pgtable.h
index 9e5a3de21e15..e1594acf7c7e 100644
--- a/arch/m68k/include/asm/motorola_pgtable.h
+++ b/arch/m68k/include/asm/motorola_pgtable.h
@@ -227,50 +227,6 @@ static inline pte_t *pte_offset_kernel(pmd_t *pmdp, unsigned long address)
 #define pte_offset_map(pmdp,address) ((pte_t *)__pmd_page(*pmdp) + (((address) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)))
 #define pte_unmap(pte)	((void)0)
 
-/* Prior to calling these routines, the page should have been flushed
- * from both the cache and ATC, or the CPU might not notice that the
- * cache setting for the page has been changed. -jskov
- */
-static inline void nocache_page(void *vaddr)
-{
-	unsigned long addr = (unsigned long)vaddr;
-
-	if (CPU_IS_040_OR_060) {
-		pgd_t *dir;
-		p4d_t *p4dp;
-		pud_t *pudp;
-		pmd_t *pmdp;
-		pte_t *ptep;
-
-		dir = pgd_offset_k(addr);
-		p4dp = p4d_offset(dir, addr);
-		pudp = pud_offset(p4dp, addr);
-		pmdp = pmd_offset(pudp, addr);
-		ptep = pte_offset_kernel(pmdp, addr);
-		*ptep = pte_mknocache(*ptep);
-	}
-}
-
-static inline void cache_page(void *vaddr)
-{
-	unsigned long addr = (unsigned long)vaddr;
-
-	if (CPU_IS_040_OR_060) {
-		pgd_t *dir;
-		p4d_t *p4dp;
-		pud_t *pudp;
-		pmd_t *pmdp;
-		pte_t *ptep;
-
-		dir = pgd_offset_k(addr);
-		p4dp = p4d_offset(dir, addr);
-		pudp = pud_offset(p4dp, addr);
-		pmdp = pmd_offset(pudp, addr);
-		ptep = pte_offset_kernel(pmdp, addr);
-		*ptep = pte_mkcache(*ptep);
-	}
-}
-
 /* Encode and de-code a swap entry (must be !pte_none(e) && !pte_present(e)) */
 #define __swp_type(x)		(((x).val >> 4) & 0xff)
 #define __swp_offset(x)		((x).val >> 12)
diff --git a/arch/m68k/mm/motorola.c b/arch/m68k/mm/motorola.c
index 904c2a663977..8e5e74121a78 100644
--- a/arch/m68k/mm/motorola.c
+++ b/arch/m68k/mm/motorola.c
@@ -45,6 +45,49 @@ unsigned long mm_cachebits;
 EXPORT_SYMBOL(mm_cachebits);
 #endif
 
+/* Prior to calling these routines, the page should have been flushed
+ * from both the cache and ATC, or the CPU might not notice that the
+ * cache setting for the page has been changed. -jskov
+ */
+static inline void nocache_page(void *vaddr)
+{
+	unsigned long addr = (unsigned long)vaddr;
+
+	if (CPU_IS_040_OR_060) {
+		pgd_t *dir;
+		p4d_t *p4dp;
+		pud_t *pudp;
+		pmd_t *pmdp;
+		pte_t *ptep;
+
+		dir = pgd_offset_k(addr);
+		p4dp = p4d_offset(dir, addr);
+		pudp = pud_offset(p4dp, addr);
+		pmdp = pmd_offset(pudp, addr);
+		ptep = pte_offset_kernel(pmdp, addr);
+		*ptep = pte_mknocache(*ptep);
+	}
+}
+
+static inline void cache_page(void *vaddr)
+{
+	unsigned long addr = (unsigned long)vaddr;
+
+	if (CPU_IS_040_OR_060) {
+		pgd_t *dir;
+		p4d_t *p4dp;
+		pud_t *pudp;
+		pmd_t *pmdp;
+		pte_t *ptep;
+
+		dir = pgd_offset_k(addr);
+		p4dp = p4d_offset(dir, addr);
+		pudp = pud_offset(p4dp, addr);
+		pmdp = pmd_offset(pudp, addr);
+		ptep = pte_offset_kernel(pmdp, addr);
+		*ptep = pte_mkcache(*ptep);
+	}
+}
 /*
  * Motorola 680x0 user's manual recommends using uncached memory for address
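
For context: after this move the only callers of these helpers are the Motorola
page-table allocation routines in arch/m68k/mm/motorola.c, which flush a freshly
allocated page-table page and mark it uncached, and make it cacheable again when
the page is released. The sketch below illustrates that calling pattern; it
assumes it sits in motorola.c next to the moved helpers, and alloc_ptable_page()
and free_ptable_page() are hypothetical names used only for illustration, not
part of this patch.

#include <linux/gfp.h>
#include <asm/tlbflush.h>

/* Hypothetical helper (not in this patch): allocate a page that will
 * hold page table entries on a 68040/68060.
 */
static void *alloc_ptable_page(void)
{
	void *page = (void *)get_zeroed_page(GFP_KERNEL);

	if (!page)
		return NULL;

	/* Flush the ATC entry before changing the caching mode, then
	 * mark the page uncached so the table-walk hardware always
	 * sees up-to-date descriptors.
	 */
	flush_tlb_kernel_page(page);
	nocache_page(page);
	return page;
}

/* Hypothetical helper (not in this patch): release a page table page,
 * restoring normal caching first.
 */
static void free_ptable_page(void *page)
{
	cache_page(page);
	free_page((unsigned long)page);
}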