Message ID | 20210615132456.753241-2-hch@lst.de (mailing list archive)
---|---
State | Not Applicable
Series | [01/18] mm: add a kunmap_local_dirty helper
On Tue, Jun 15, 2021 at 03:24:39PM +0200, Christoph Hellwig wrote:
> Add a helper that calls flush_kernel_dcache_page before unmapping the
> local mapping.  flush_kernel_dcache_page is required for all pages
> potentially mapped into userspace that were written to using kmap*,
> so having a helper that does the right thing can be very convenient.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  include/linux/highmem-internal.h | 7 +++++++
>  include/linux/highmem.h          | 4 ++++
>  2 files changed, 11 insertions(+)
> 
> diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
> index 7902c7d8b55f..bd37706db147 100644
> --- a/include/linux/highmem-internal.h
> +++ b/include/linux/highmem-internal.h
> @@ -224,4 +224,11 @@ do {						\
>  	__kunmap_local(__addr);				\
>  } while (0)
> 
> +#define kunmap_local_dirty(__page, __addr)		\

I think having to store the page and addr to return to kunmap_local_dirty() is
going to be a pain in some code paths.  Not a show stopper but see below...

> +do {							\
> +	if (!PageSlab(__page))				\

Was there some clarification why the page can't be a Slab page?  Or is this
just an optimization?

> +		flush_kernel_dcache_page(__page);	\

Is this required on 32bit systems?  Why is kunmap_flush_on_unmap() not
sufficient on 64bit systems?  The normal kunmap_local() path does that.

I'm sorry but I did not see a conclusion to my query on V1.  Herbert implied
that he just copied from the crypto code.[1]  I'm concerned that this _dirty()
call is just going to confuse the users of kmap even more.  So why can't we
get to the bottom of why flush_kernel_dcache_page() needs so much logic around
it before complicating the general kernel users?

I would like to see it go away if possible.

Ira

[1] https://lore.kernel.org/lkml/20210615050258.GA5208@gondor.apana.org.au/

> +	kunmap_local(__addr);				\
> +} while (0)
> +
>  #endif
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 832b49b50c7b..65f548db4f2d 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -93,6 +93,10 @@ static inline void kmap_flush_unused(void);
>   * On HIGHMEM enabled systems mapping a highmem page has the side effect of
>   * disabling migration in order to keep the virtual address stable across
>   * preemption. No caller of kmap_local_page() can rely on this side effect.
> + *
> + * If data is written to the returned kernel mapping, the caller needs to
> + * unmap the mapping using kunmap_local_dirty(), else kunmap_local() should
> + * be used.
>   */
>  static inline void *kmap_local_page(struct page *page);
> 
> -- 
> 2.30.2
> 
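A minimal sketch (a hypothetical caller, not taken from the series) of the
calling pattern the proposed macro implies; it illustrates the bookkeeping
concern above, since both the struct page and the mapped address have to stay
live until the unmap:

#include <linux/highmem.h>
#include <linux/string.h>

static void fill_page_example(struct page *page, const void *src, size_t len)
{
	void *addr = kmap_local_page(page);

	memcpy(addr, src, len);		/* the page is now dirty */

	/*
	 * Plain kunmap_local(addr) only needs the address; the proposed
	 * helper also requires the struct page to be carried to this point.
	 */
	kunmap_local_dirty(page, addr);
}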
On Thu, Jun 17, 2021 at 08:01:57PM -0700, Ira Weiny wrote:
> 
> > +		flush_kernel_dcache_page(__page);	\
> 
> Is this required on 32bit systems?  Why is kunmap_flush_on_unmap() not
> sufficient on 64bit systems?  The normal kunmap_local() path does that.
> 
> I'm sorry but I did not see a conclusion to my query on V1.  Herbert implied
> that he just copied from the crypto code.[1]  I'm concerned that this _dirty()
> call is just going to confuse the users of kmap even more.  So why can't we
> get to the bottom of why flush_kernel_dcache_page() needs so much logic around
> it before complicating the general kernel users?
> 
> I would like to see it go away if possible.

This thread may be related:

https://lwn.net/Articles/240249/

Cheers,
On Fri, Jun 18, 2021 at 11:37:28AM +0800, Herbert Xu wrote:
> On Thu, Jun 17, 2021 at 08:01:57PM -0700, Ira Weiny wrote:
> > 
> > > +		flush_kernel_dcache_page(__page);	\
> > 
> > Is this required on 32bit systems?  Why is kunmap_flush_on_unmap() not
> > sufficient on 64bit systems?  The normal kunmap_local() path does that.
> > 
> > I'm sorry but I did not see a conclusion to my query on V1.  Herbert implied
> > that he just copied from the crypto code.[1]  I'm concerned that this
> > _dirty() call is just going to confuse the users of kmap even more.  So why
> > can't we get to the bottom of why flush_kernel_dcache_page() needs so much
> > logic around it before complicating the general kernel users?
> > 
> > I would like to see it go away if possible.
> 
> This thread may be related:
> 
> https://lwn.net/Articles/240249/

Interesting!  Thanks!

Digging around a bit more I found:

https://lore.kernel.org/patchwork/patch/439637/

Auditing all the flush_dcache_page() arch code reveals that the mapping field
is either unused, or is checked for NULL.  Furthermore, all the
implementations call page_mapping_file(), which further limits the page to
not be a swap page.

All flush_kernel_dcache_page() implementations appear to operate the same way
in all arches which define that call.

So I'm confident now that additional !PageSlab(__page) checks are not needed
and this patch is unnecessary.

Christoph, can we leave this out of the kmap API and just fold the
flush_kernel_dcache_page() calls back into the bvec code?

Unfortunately, I'm not convinced this can be handled completely by
kunmap_local() nor the mem*_page() calls, because there is a difference
between flush_dcache_page() and flush_kernel_dcache_page() in most archs...
[parisc being an exception which falls back to flush_kernel_dcache_page()]...

It seems like the generic unmap path _should_ be able to determine which call
to make based on the page, but I'd have to look at that more.

Ira
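As a rough illustration of the pattern described in the audit above, here is a
simplified sketch (not any particular arch's real code; the writeback helper
name is made up as a stand-in for the arch cache-flush primitive):

#include <linux/mm.h>
#include <linux/pagemap.h>

/* stand-in for an arch-specific cache writeback primitive (assumed name) */
static void example_writeback_kernel_alias(void *addr, size_t len)
{
}

static void example_flush_dcache_page(struct page *page)
{
	/* page_mapping_file() returns NULL for anonymous and swap pages */
	struct address_space *mapping = page_mapping_file(page);

	/* skip pages without a file mapping, per the audit above (simplified) */
	if (!mapping)
		return;

	/* write back the kernel alias so user mappings observe the new data */
	example_writeback_kernel_alias(page_address(page), PAGE_SIZE);
}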
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 7902c7d8b55f..bd37706db147 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -224,4 +224,11 @@ do {						\
 	__kunmap_local(__addr);				\
 } while (0)
 
+#define kunmap_local_dirty(__page, __addr)		\
+do {							\
+	if (!PageSlab(__page))				\
+		flush_kernel_dcache_page(__page);	\
+	kunmap_local(__addr);				\
+} while (0)
+
 #endif
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 832b49b50c7b..65f548db4f2d 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -93,6 +93,10 @@ static inline void kmap_flush_unused(void);
  * On HIGHMEM enabled systems mapping a highmem page has the side effect of
  * disabling migration in order to keep the virtual address stable across
  * preemption. No caller of kmap_local_page() can rely on this side effect.
+ *
+ * If data is written to the returned kernel mapping, the caller needs to
+ * unmap the mapping using kunmap_local_dirty(), else kunmap_local() should
+ * be used.
  */
 static inline void *kmap_local_page(struct page *page);
Add a helper that calls flush_kernel_dcache_page before unmapping the local
mapping.  flush_kernel_dcache_page is required for all pages potentially
mapped into userspace that were written to using kmap*, so having a helper
that does the right thing can be very convenient.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/highmem-internal.h | 7 +++++++
 include/linux/highmem.h          | 4 ++++
 2 files changed, 11 insertions(+)
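For context, a hedged sketch of the kind of consumer discussed in the thread
for this helper, a bvec copy routine (the function name and layout are
assumptions, not code from the series):

#include <linux/bvec.h>
#include <linux/highmem.h>
#include <linux/string.h>

static inline void memcpy_to_bvec_example(struct bio_vec *bvec, const char *from)
{
	void *to = kmap_local_page(bvec->bv_page);

	memcpy(to + bvec->bv_offset, from, bvec->bv_len);

	/* the page was written through the kernel mapping, so unmap "dirty" */
	kunmap_local_dirty(bvec->bv_page, to);
}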