From patchwork Sat Sep 19 09:18:06 2020
Message-Id: <20200919092617.375720378@linutronix.de>
Date: Sat, 19 Sep 2020 11:18:06 +0200
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML
Cc: linux-arch@vger.kernel.org, Linus Torvalds, Paul McKenney, x86@kernel.org,
    Sebastian Andrzej Siewior, Peter Zijlstra, Juri Lelli, Vincent Guittot,
    Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
    Daniel Bristot de Oliveira, Will Deacon, Andrew Morton, Linux-MM,
    Russell King, Linux ARM, Chris Zankel, Max Filippov,
    linux-xtensa@linux-xtensa.org, Jani Nikula, Joonas Lahtinen,
    Rodrigo Vivi, David Airlie, Daniel Vetter, intel-gfx, dri-devel,
    Ard Biesheuvel, Herbert Xu, Vineet Gupta, linux-snps-arc@lists.infradead.org,
    Arnd Bergmann, Guo Ren, linux-csky@vger.kernel.org, Michal Simek,
    Thomas Bogendoerfer, linux-mips@vger.kernel.org, Nick Hu, Greentime Hu,
    Vincent Chen, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras,
    linuxppc-dev@lists.ozlabs.org, "David S. Miller", sparclinux@vger.kernel.org
Subject: [patch RFC 15/15] mm/highmem: Provide kmap_temporary*
References: <20200919091751.011116649@linutronix.de>

Now that the kmap atomic index is stored in the task struct, provide a
preemptible variant. On context switch, the maps of the outgoing task are
removed and the maps of the incoming task are restored. That's obviously
slow, but highmem is slow anyway.

The kmap_temporary and iomap_temporary interfaces can be invoked from both
preemptible and atomic context.

A wholesale conversion of kmap_atomic() to be fully preemptible is not
possible because some of the usage sites might rely on the implicit
preemption disable for serialization or per-CPU-ness. That conversion needs
to be done on a case-by-case basis.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
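[ Note for review: a minimal usage sketch, assuming only the interfaces
  added by this patch. copy_demo() and its arguments are made up for
  illustration and are not part of the series; it contrasts the existing
  kmap_atomic() pattern with the new kmap_temporary() one. ]

/* Hypothetical illustration only, not part of the patch. */
static void copy_demo(struct page *page, void *dst, size_t len)
{
	void *vaddr;

	/*
	 * Existing pattern: kmap_atomic() disables preemption, so
	 * nothing between map and unmap may schedule.
	 */
	vaddr = kmap_atomic(page);
	memcpy(dst, vaddr, len);
	kunmap_atomic(vaddr);

	/*
	 * New pattern: the mapping survives preemption because the
	 * kmap index is stored in task_struct and the map is restored
	 * on context switch, so this section may be preempted.
	 */
	vaddr = kmap_temporary_page(page);
	memcpy(dst, vaddr, len);
	kunmap_temporary(vaddr);
}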
 arch/x86/include/asm/iomap.h |   16 ++++++++-
 arch/x86/mm/iomap_32.c       |    7 +---
 include/linux/highmem.h      |   70 +++++++++++++++++++++++++++++++++----------
 mm/highmem.c                 |   18 +++++------
 4 files changed, 80 insertions(+), 31 deletions(-)

--- a/arch/x86/include/asm/iomap.h
+++ b/arch/x86/include/asm/iomap.h
@@ -13,11 +13,23 @@
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
 
-void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot);
+void __iomem *iomap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot);
+
+static inline void __iomem *iomap_atomic_pfn_prot(unsigned long pfn,
+						  pgprot_t prot)
+{
+	preempt_disable();
+	return iomap_temporary_pfn_prot(pfn, prot);
+}
+
+static inline void iounmap_temporary(void __iomem *vaddr)
+{
+	kunmap_temporary_indexed((void __force *)vaddr);
+}
 
 static inline void iounmap_atomic(void __iomem *vaddr)
 {
-	kunmap_atomic_indexed((void __force *)vaddr);
+	iounmap_temporary(vaddr);
 	preempt_enable();
 }
 
--- a/arch/x86/mm/iomap_32.c
+++ b/arch/x86/mm/iomap_32.c
@@ -44,7 +44,7 @@ void iomap_free(resource_size_t base, un
 }
 EXPORT_SYMBOL_GPL(iomap_free);
 
-void __iomem *iomap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot)
+void __iomem *iomap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot)
 {
 	/*
 	 * For non-PAT systems, translate non-WB request to UC- just in
@@ -60,7 +60,6 @@ void __iomem *iomap_atomic_pfn_prot(unsi
 	/* Filter out unsupported __PAGE_KERNEL* bits: */
 	pgprot_val(prot) &= __default_kernel_pte_mask;
 
-	preempt_disable();
-	return (void __force __iomem *)kmap_atomic_pfn_prot(pfn, prot);
+	return (void __force __iomem *)__kmap_temporary_pfn_prot(pfn, prot);
 }
-EXPORT_SYMBOL_GPL(iomap_atomic_pfn_prot);
+EXPORT_SYMBOL_GPL(iomap_temporary_pfn_prot);
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -35,9 +35,9 @@ static inline void invalidate_kernel_vma
  * Outside of CONFIG_HIGHMEM to support X86 32bit iomap_atomic() cruft.
  */
 #ifdef CONFIG_KMAP_ATOMIC_GENERIC
-void *kmap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot);
-void *kmap_atomic_page_prot(struct page *page, pgprot_t prot);
-void kunmap_atomic_indexed(void *vaddr);
+void *__kmap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot);
+void *__kmap_temporary_page_prot(struct page *page, pgprot_t prot);
+void kunmap_temporary_indexed(void *vaddr);
 void kmap_switch_temporary(struct task_struct *prev, struct task_struct *next);
 # ifndef ARCH_NEEDS_KMAP_HIGH_GET
 static inline void *arch_kmap_temporary_high_get(struct page *page)
@@ -95,16 +95,35 @@ static inline void kunmap(struct page *p
  * be used in IRQ contexts, so in some (very limited) cases we need
  * it.
  */
-static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+static inline void *kmap_temporary_page_prot(struct page *page, pgprot_t prot)
 {
-	preempt_disable();
-	return kmap_atomic_page_prot(page, prot);
+	return __kmap_temporary_page_prot(page, prot);
 }
 
-static inline void *kmap_atomic_pfn(unsigned long pfn)
+static inline void *kmap_temporary_page(struct page *page)
+{
+	return kmap_temporary_page_prot(page, kmap_prot);
+}
+
+static inline void *kmap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot)
+{
+	return __kmap_temporary_pfn_prot(pfn, prot);
+}
+
+static inline void *kmap_temporary_pfn(unsigned long pfn)
+{
+	return kmap_temporary_pfn_prot(pfn, kmap_prot);
+}
+
+static inline void __kunmap_temporary(void *vaddr)
+{
+	kunmap_temporary_indexed(vaddr);
+}
+
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 {
 	preempt_disable();
-	return kmap_atomic_pfn_prot(pfn, kmap_prot);
+	return kmap_temporary_page_prot(page, prot);
 }
 
 static inline void *kmap_atomic(struct page *page)
@@ -112,9 +131,10 @@ static inline void *kmap_atomic(struct p
 	return kmap_atomic_prot(page, kmap_prot);
 }
 
-static inline void __kunmap_atomic(void *addr)
+static inline void *kmap_atomic_pfn(unsigned long pfn)
 {
-	kumap_atomic_indexed(addr);
+	preempt_disable();
+	return kmap_temporary_pfn_prot(pfn, kmap_prot);
 }
 
 /* declarations for linux/mm/highmem.c */
@@ -177,6 +197,22 @@ static inline void kunmap(struct page *p
 #endif
 }
 
+static inline void *kmap_temporary_page(struct page *page)
+{
+	pagefault_disable();
+	return page_address(page);
+}
+
+static inline void *kmap_temporary_page_prot(struct page *page, pgprot_t prot)
+{
+	return kmap_temporary_page(page);
+}
+
+static inline void *kmap_temporary_pfn(unsigned long pfn)
+{
+	return kmap_temporary_page(pfn_to_page(pfn));
+}
+
 static inline void *kmap_atomic(struct page *page)
 {
 	preempt_disable();
@@ -194,12 +230,8 @@ static inline void *kmap_atomic_pfn(unsi
 	return kmap_atomic(pfn_to_page(pfn));
 }
 
-static inline void __kunmap_atomic(void *addr)
+static inline void __kunmap_temporary(void *addr)
 {
-	/*
-	 * Mostly nothing to do in the CONFIG_HIGHMEM=n case as kunmap_atomic()
-	 * handles preemption
-	 */
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(addr);
 #endif
@@ -217,10 +249,16 @@ static inline void __kunmap_atomic(void
 #define kunmap_atomic(addr)					\
 do {								\
 	BUILD_BUG_ON(__same_type((addr), struct page *));	\
-	__kunmap_atomic(addr);					\
+	__kunmap_temporary(addr);				\
 	preempt_enable();					\
 } while (0)
 
+#define kunmap_temporary(addr)					\
+do {								\
+	BUILD_BUG_ON(__same_type((addr), struct page *));	\
+	__kunmap_temporary(addr);				\
+} while (0)
+
 /* when CONFIG_HIGHMEM is not set these will be plain clear/copy_page */
 #ifndef clear_user_highpage
 static inline void clear_user_highpage(struct page *page, unsigned long vaddr)
--- a/mm/highmem.c
+++ b/mm/highmem.c
@@ -432,7 +432,7 @@ static pte_t *kmap_get_pte(void)
 	return __kmap_pte;
 }
 
-static void *__kmap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot)
+static void *do_kmap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot)
 {
 	pte_t pteval, *kmap_pte = kmap_get_pte();
 	unsigned long vaddr;
@@ -451,14 +451,14 @@ static void *__kmap_atomic_pfn_prot(unsi
 	return (void *)vaddr;
 }
 
-void *kmap_atomic_pfn_prot(unsigned long pfn, pgprot_t prot)
+void *__kmap_temporary_pfn_prot(unsigned long pfn, pgprot_t prot)
 {
 	pagefault_disable();
-	return __kmap_atomic_pfn_prot(pfn, prot);
+	return do_kmap_temporary_pfn_prot(pfn, prot);
 }
-EXPORT_SYMBOL(kmap_atomic_pfn_prot);
+EXPORT_SYMBOL(__kmap_temporary_pfn_prot);
 
-void *kmap_atomic_page_prot(struct page *page, pgprot_t prot)
+void *__kmap_temporary_page_prot(struct page *page, pgprot_t prot)
 {
 	void *kmap;
 
@@ -471,11 +471,11 @@ void *kmap_atomic_page_prot(struct page
 	if (kmap)
 		return kmap;
 
-	return __kmap_atomic_pfn_prot(page_to_pfn(page), prot);
+	return do_kmap_temporary_pfn_prot(page_to_pfn(page), prot);
 }
-EXPORT_SYMBOL(kmap_atomic_page_prot);
+EXPORT_SYMBOL(__kmap_temporary_page_prot);
 
-void kunmap_atomic_indexed(void *vaddr)
+void kunmap_temporary_indexed(void *vaddr)
 {
 	unsigned long addr = (unsigned long) vaddr & PAGE_MASK;
 	pte_t *kmap_pte = kmap_get_pte();
@@ -503,7 +503,7 @@ void kunmap_atomic_indexed(void *vaddr)
 	preempt_enable();
 	pagefault_enable();
 }
-EXPORT_SYMBOL(kunmap_atomic_indexed);
+EXPORT_SYMBOL(kunmap_temporary_indexed);
 
 void kmap_switch_temporary(struct task_struct *prev, struct task_struct *next)
 {
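
[ Note for review: a second hypothetical sketch, not part of the patch,
  for the iomap side. write_reg_demo(), the pfn, the offset and the use
  of PAGE_KERNEL_IO_NOCACHE are made up for illustration; real callers
  would take the pgprot from their io_mapping. ]

/* Hypothetical illustration only, not part of the patch. */
static void write_reg_demo(unsigned long pfn, unsigned long offset, u32 val)
{
	void __iomem *vaddr;

	/*
	 * Preemptible variant: unlike iomap_atomic_pfn_prot(), there is
	 * no implicit preempt_disable()/preempt_enable() pair around
	 * the mapped section.
	 */
	vaddr = iomap_temporary_pfn_prot(pfn, PAGE_KERNEL_IO_NOCACHE);
	iowrite32(val, vaddr + offset);
	iounmap_temporary(vaddr);
}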