| Message ID | 20201009195033.3208459-6-ira.weiny@intel.com |
| --- | --- |
| State | RFC |
| Series | PMEM: Introduce stray write protection for PMEM |
Ira,

On Fri, Oct 09 2020 at 12:49, ira weiny wrote:
> From: Ira Weiny <ira.weiny@intel.com>
>
> To correctly support the semantics of kmap() with kernel protection keys
> (PKS), kmap() may be required to set the protections on multiple
> processors (globally). Enabling PKS globally can be very expensive
> depending on the requested operation. Furthermore, enabling a domain
> globally reduces the protection afforded by PKS.
>
> Most kmap() callers (approx. 209 of 229) use the map within a single
> thread and have no need for the protection domain to be enabled globally.
> However, the remaining callers do not follow this pattern and, as best I
> can tell, expect the mapping to be 'global' and available to any thread
> that may access the mapping.[1]
>
> We don't anticipate global mappings to pmem; however, in general there is
> a danger in changing the semantics of kmap(). Effectively, this would
> cause an unresolved page fault with little to no information about why
> the failure occurred.
>
> To resolve this, a number of options were considered:
>
> 1) Attempt to change all the thread-local kmap() calls to kmap_atomic()[2]
> 2) Introduce a flags parameter to kmap() to indicate if the mapping
>    should be global or not
> 3) Change ~20 call sites to 'kmap_global()' to indicate that they
>    require a global enablement of the pages.
> 4) Change ~209 call sites to 'kmap_thread()' to indicate that the
>    mapping is to be used within that thread of execution only
>
> Option 1 is simply not feasible. Option 2 would require all of the call
> sites of kmap() to change. Option 3 seems like a good minimal change, but
> there is a danger that new code may miss the semantic change of kmap()
> and not get the behavior the developer intended. Therefore, #4 was
> chosen.

There is Option #5:

Convert the thread-local kmap() invocations to the proposed kmap_local()
interface which is coming along [1].

That solves a couple of issues:

1) It relieves the current kmap_atomic() usage sites from the implicit
   pagefault/preempt disable semantics which apply even when
   CONFIG_HIGHMEM is disabled. kmap_local() can still be invoked from
   atomic context.

2) Due to #1 it allows replacing the conditional usage of kmap() and
   kmap_atomic() for purely thread-local mappings.

3) It puts the burden on the HIGHMEM-inflicted systems.

4) It is actually more efficient for most of the pure thread-local use
   cases on HIGHMEM-inflicted systems because it avoids the overhead of
   the global lock and the potential kmap slot exhaustion. A potential
   preemption will be more expensive, but that's not really the case we
   want to optimize for.

5) It solves the RT issue vs. kmap_atomic().

So instead of creating yet another variety of kmap() which is just
scratching the particular PKRS itch, can we please consolidate all of
that on the wider-reaching kmap_local() approach?

Thanks,

tglx

[1] https://lore.kernel.org/lkml/20201103092712.714480842@linutronix.de/
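To illustrate the consolidation Thomas describes, a purely thread-local
mapping no longer has to pick between kmap() and kmap_atomic() based on
context. A minimal sketch, assuming the kmap_local_page()/kunmap_local()
names from the series in [1]:

#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Sketch only: the same thread-local work expressed with both
 * interfaces; the function name is illustrative.
 */
static void zero_page_both_ways(struct page *page)
{
	void *addr;

	/* Old style: implies pagefault/preempt disable, even on !HIGHMEM. */
	addr = kmap_atomic(page);
	memset(addr, 0, PAGE_SIZE);
	kunmap_atomic(addr);

	/* kmap_local() style: thread-local, still usable from atomic context. */
	addr = kmap_local_page(page);
	memset(addr, 0, PAGE_SIZE);
	kunmap_local(addr);
}

Note how the second form carries no hidden side effects on preemption,
which is the substance of points 1) and 2) above.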
On Tue, Nov 10, 2020 at 02:13:56AM +0100, Thomas Gleixner wrote:
> Ira,
>
> On Fri, Oct 09 2020 at 12:49, ira weiny wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> >
> > To correctly support the semantics of kmap() with kernel protection keys
> > (PKS), kmap() may be required to set the protections on multiple
> > processors (globally). Enabling PKS globally can be very expensive
> > depending on the requested operation. Furthermore, enabling a domain
> > globally reduces the protection afforded by PKS.
> >
> > Most kmap() callers (approx. 209 of 229) use the map within a single
> > thread and have no need for the protection domain to be enabled globally.
> > However, the remaining callers do not follow this pattern and, as best I
> > can tell, expect the mapping to be 'global' and available to any thread
> > that may access the mapping.[1]
> >
> > We don't anticipate global mappings to pmem; however, in general there is
> > a danger in changing the semantics of kmap(). Effectively, this would
> > cause an unresolved page fault with little to no information about why
> > the failure occurred.
> >
> > To resolve this, a number of options were considered:
> >
> > 1) Attempt to change all the thread-local kmap() calls to kmap_atomic()[2]
> > 2) Introduce a flags parameter to kmap() to indicate if the mapping
> >    should be global or not
> > 3) Change ~20 call sites to 'kmap_global()' to indicate that they
> >    require a global enablement of the pages.
> > 4) Change ~209 call sites to 'kmap_thread()' to indicate that the
> >    mapping is to be used within that thread of execution only
> >
> > Option 1 is simply not feasible. Option 2 would require all of the call
> > sites of kmap() to change. Option 3 seems like a good minimal change, but
> > there is a danger that new code may miss the semantic change of kmap()
> > and not get the behavior the developer intended. Therefore, #4 was
> > chosen.
>
> There is Option #5:

There is now, yes. :-D

> Convert the thread-local kmap() invocations to the proposed kmap_local()
> interface which is coming along [1].

I've been trying to follow that thread.

> That solves a couple of issues:
>
> 1) It relieves the current kmap_atomic() usage sites from the implicit
>    pagefault/preempt disable semantics which apply even when
>    CONFIG_HIGHMEM is disabled. kmap_local() can still be invoked from
>    atomic context.
>
> 2) Due to #1 it allows replacing the conditional usage of kmap() and
>    kmap_atomic() for purely thread-local mappings.
>
> 3) It puts the burden on the HIGHMEM-inflicted systems.
>
> 4) It is actually more efficient for most of the pure thread-local use
>    cases on HIGHMEM-inflicted systems because it avoids the overhead of
>    the global lock and the potential kmap slot exhaustion. A potential
>    preemption will be more expensive, but that's not really the case we
>    want to optimize for.
>
> 5) It solves the RT issue vs. kmap_atomic().
>
> So instead of creating yet another variety of kmap() which is just
> scratching the particular PKRS itch, can we please consolidate all of
> that on the wider-reaching kmap_local() approach?

Yes, I agree. We absolutely don't want more kmap*() calls, and I was hoping
to dovetail into your kmap_local() work.[2]

I've pivoted away from this work a bit to clean up all the
kmap()/memcpy*()/kunmap() sequences as discussed elsewhere in the thread
first.[3] I was hoping your work would land and then I could
s/kmap_thread()/kmap_local()/ on all of these patches.

Also, we can convert the new memcpy_*_page() calls to kmap_local() as well.
[For now my patch just uses kmap_atomic().]

I've not looked at all of the patches in your latest version. Have you
included converting any of the kmap() call sites? I thought you were more
focused on converting the kmap_atomic() calls to kmap_local()?

Ira

> Thanks,
>
> tglx
>
> [1] https://lore.kernel.org/lkml/20201103092712.714480842@linutronix.de/

[2] https://lore.kernel.org/lkml/20201012195354.GC2046448@iweiny-DESK2.sc.intel.com/
[3] https://lore.kernel.org/lkml/20201009213434.GA839@sol.localdomain/
    https://lore.kernel.org/lkml/20201013200149.GI3576660@ZenIV.linux.org.uk/
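The memcpy_*_page() helpers Ira refers to wrap exactly this map/copy/unmap
pattern. A rough sketch of what one such helper could look like (the exact
name and signature are illustrative), using kmap_atomic() for now as the
mail says, with the local variant being a later one-line swap:

#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Sketch: copy a buffer into a page. Today's body uses kmap_atomic();
 * s/kmap_atomic/kmap_local_page/ (and the matching kunmap) would move
 * it to the kmap_local interface once that lands.
 */
static inline void memcpy_to_page(struct page *page, size_t offset,
				  const char *from, size_t len)
{
	char *to = kmap_atomic(page);

	memcpy(to + offset, from, len);
	kunmap_atomic(to);
}

Centralizing the pattern this way means the eventual interface switch
touches one helper instead of every call site.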
On Mon, Nov 09 2020 at 20:59, Ira Weiny wrote:
> On Tue, Nov 10, 2020 at 02:13:56AM +0100, Thomas Gleixner wrote:
> Also, we can convert the new memcpy_*_page() calls to kmap_local() as
> well. [For now my patch just uses kmap_atomic().]
>
> I've not looked at all of the patches in your latest version. Have you
> included converting any of the kmap() call sites? I thought you were more
> focused on converting the kmap_atomic() calls to kmap_local()?

I did not touch any of those yet, but it's a logical consequence to convert
all kmap() instances which are _not_ creating a global mapping over to it.

Thanks,

tglx
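The conversion Thomas outlines is largely mechanical, with one detail worth
noting: kunmap() takes the page, while kunmap_local() takes the address
returned by the mapping call. A sketch of a before/after for one
thread-local caller (copy_from_mapped_page() is a hypothetical stand-in for
the caller's real work):

#include <linux/highmem.h>

void copy_from_mapped_page(void *buf);	/* hypothetical stand-in */

static void consume_page(struct page *page)
{
	void *buf;

	/* Before: thread-local use of the global kmap() interface. */
	buf = kmap(page);
	copy_from_mapped_page(buf);
	kunmap(page);			/* note: unmap by page */

	/* After: no global mapping is created. */
	buf = kmap_local_page(page);
	copy_from_mapped_page(buf);
	kunmap_local(buf);		/* note: unmap by address */
}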
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 2a9806e3b8d2..ef7813544719 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -60,7 +60,7 @@ static inline void kmap_flush_tlb(unsigned long addr) { }
 #endif
 
 void *kmap_high(struct page *page);
-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	void *addr;
 
@@ -74,20 +74,20 @@ static inline void *kmap(struct page *page)
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially enabled.
 	 */
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return addr;
 }
 
 void kunmap_high(struct page *page);
-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
 	might_sleep();
 	/*
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially disabled.
 	 */
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 	if (!PageHighMem(page))
 		return;
 	kunmap_high(page);
@@ -160,10 +160,10 @@ static inline struct page *kmap_to_page(void *addr)
 
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	might_sleep();
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return page_address(page);
 }
 
@@ -171,9 +171,9 @@ static inline void kunmap_high(struct page *page)
 {
 }
 
-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
@@ -238,6 +238,24 @@ static inline void kmap_atomic_idx_pop(void)
 
 #endif
 
+static inline void *kmap(struct page *page)
+{
+	return __kmap(page, true);
+}
+static inline void kunmap(struct page *page)
+{
+	__kunmap(page, true);
+}
+
+static inline void *kmap_thread(struct page *page)
+{
+	return __kmap(page, false);
+}
+static inline void kunmap_thread(struct page *page)
+{
+	__kunmap(page, false);
+}
+
 /*
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
  * kunmap_atomic() should get the return value of kmap_atomic, not the page.
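For reference, under this patch a caller whose mapping never leaves the
current thread would use the new pair and thereby avoid the global PKS
enable. The caller below is hypothetical, for illustration only:

#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Hypothetical caller: the mapping is used only within this thread,
 * so the protection domain need not be enabled globally.
 */
static void read_page_into(struct page *page, char *dst, size_t len)
{
	char *src = kmap_thread(page);

	memcpy(dst, src, len);
	kunmap_thread(page);
}

Callers that intentionally publish the mapping to other threads keep the
unchanged kmap()/kunmap() pair, which maps to __kmap(page, true).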