Message ID | 20230228213738.272178-3-willy@infradead.org (mailing list archive) |
---|---|
State | New |
Series | New page table range API |
On Tue, Feb 28, 2023 at 09:37:05PM +0000, Matthew Wilcox (Oracle) wrote:
> flush_icache_page() is deprecated but not yet removed, so add
> a range version of it. Change the documentation to refer to
> update_mmu_cache_range() instead of update_mmu_cache().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Acked-by: Mike Rapoport (IBM) <rppt@kernel.org>

> ---
>  Documentation/core-api/cachetlb.rst | 35 +++++++++++++++--------------
>  include/asm-generic/cacheflush.h    |  5 +++++
>  2 files changed, 23 insertions(+), 17 deletions(-)
>
> diff --git a/Documentation/core-api/cachetlb.rst b/Documentation/core-api/cachetlb.rst
> index 5c0552e78c58..d4c9e2a28d36 100644
> --- a/Documentation/core-api/cachetlb.rst
> +++ b/Documentation/core-api/cachetlb.rst
> @@ -88,13 +88,13 @@ changes occur:
>
>  	This is used primarily during fault processing.
>
> -5) ``void update_mmu_cache(struct vm_area_struct *vma,
> -   unsigned long address, pte_t *ptep)``
> +5) ``void update_mmu_cache_range(struct vm_area_struct *vma,
> +   unsigned long address, pte_t *ptep, unsigned int nr)``
>
> -	At the end of every page fault, this routine is invoked to
> -	tell the architecture specific code that a translation
> -	now exists at virtual address "address" for address space
> -	"vma->vm_mm", in the software page tables.
> +	At the end of every page fault, this routine is invoked to tell
> +	the architecture specific code that translations now exist
> +	in the software page tables for address space "vma->vm_mm"
> +	at virtual address "address" for "nr" consecutive pages.
>
>  	A port may use this information in any way it so chooses.
>  	For example, it could use this event to pre-load TLB
> @@ -306,17 +306,18 @@ maps this page at its virtual address.
>  	private". The kernel guarantees that, for pagecache pages, it will
>  	clear this bit when such a page first enters the pagecache.
>
> -	This allows these interfaces to be implemented much more efficiently.
> -	It allows one to "defer" (perhaps indefinitely) the actual flush if
> -	there are currently no user processes mapping this page. See sparc64's
> -	flush_dcache_page and update_mmu_cache implementations for an example
> -	of how to go about doing this.
> +	This allows these interfaces to be implemented much more
> +	efficiently. It allows one to "defer" (perhaps indefinitely) the
> +	actual flush if there are currently no user processes mapping this
> +	page. See sparc64's flush_dcache_page and update_mmu_cache_range
> +	implementations for an example of how to go about doing this.
>
> -	The idea is, first at flush_dcache_page() time, if page_file_mapping()
> -	returns a mapping, and mapping_mapped on that mapping returns %false,
> -	just mark the architecture private page flag bit. Later, in
> -	update_mmu_cache(), a check is made of this flag bit, and if set the
> -	flush is done and the flag bit is cleared.
> +	The idea is, first at flush_dcache_page() time, if
> +	page_file_mapping() returns a mapping, and mapping_mapped on that
> +	mapping returns %false, just mark the architecture private page
> +	flag bit. Later, in update_mmu_cache_range(), a check is made
> +	of this flag bit, and if set the flush is done and the flag bit
> +	is cleared.
>
>  	.. important::
>
> @@ -369,7 +370,7 @@ maps this page at its virtual address.
>    ``void flush_icache_page(struct vm_area_struct *vma, struct page *page)``
>
>  	All the functionality of flush_icache_page can be implemented in
> -	flush_dcache_page and update_mmu_cache. In the future, the hope
> +	flush_dcache_page and update_mmu_cache_range. In the future, the hope
>  	is to remove this interface completely.
>
>  The final category of APIs is for I/O to deliberately aliased address
> diff --git a/include/asm-generic/cacheflush.h b/include/asm-generic/cacheflush.h
> index f46258d1a080..09d51a680765 100644
> --- a/include/asm-generic/cacheflush.h
> +++ b/include/asm-generic/cacheflush.h
> @@ -78,6 +78,11 @@ static inline void flush_icache_range(unsigned long start, unsigned long end)
>  #endif
>
>  #ifndef flush_icache_page
> +static inline void flush_icache_pages(struct vm_area_struct *vma,
> +				      struct page *page, unsigned int nr)
> +{
> +}
> +
>  static inline void flush_icache_page(struct vm_area_struct *vma,
> 				     struct page *page)
>  {
> --
> 2.39.1
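The asm-generic flush_icache_pages() added by the hunk above is an empty stub only because the generic flush_icache_page() is itself a no-op. For an architecture whose flush_icache_page() does real work, the natural range version is a loop over the nr consecutive pages. A minimal sketch under that assumption, not code from this series:

/*
 * Hypothetical arch-side range flush; not part of this patch.
 * Assumes the nr pages are consecutive struct pages (e.g. one
 * folio), so "page + i" walks them, and reuses the existing
 * single-page primitive.
 */
static inline void flush_icache_pages(struct vm_area_struct *vma,
				      struct page *page, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++)
		flush_icache_page(vma, page + i);
}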
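The documentation change reads the same way: update_mmu_cache_range() is the old hook generalised to nr consecutive pages. As a way of picturing the new contract (illustration only, not from the patch), a port that still has only the single-entry hook could satisfy it like this, assuming consecutive PTE slots mapping consecutive PAGE_SIZE-aligned addresses:

/*
 * Hypothetical compatibility shim, shown only to illustrate the
 * contract: report nr translations, one PAGE_SIZE step and one
 * PTE slot at a time, to a port that only implements the old
 * single-entry update_mmu_cache().
 */
static inline void update_mmu_cache_range(struct vm_area_struct *vma,
		unsigned long address, pte_t *ptep, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++)
		update_mmu_cache(vma, address + i * PAGE_SIZE, ptep + i);
}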
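Finally, the deferred-flush idea in the quoted cachetlb.rst hunk (mark an arch-private page flag in flush_dcache_page() while no user mapping exists, then pay the flush in update_mmu_cache_range() once one appears) has roughly the following shape. This is a simplified sketch loosely modelled on how ports such as sparc64 use PG_arch_1; arch_flush_dcache() is a stand-in name, and real code must also check pte_present() and handle folios:

/* Sketch of the deferred D-cache flush pattern; not real arch code. */

void flush_dcache_page(struct page *page)
{
	struct address_space *mapping = page_file_mapping(page);

	/* No user mappings yet: just record that a flush is owed. */
	if (mapping && !mapping_mapped(mapping)) {
		set_bit(PG_arch_1, &page->flags);
		return;
	}

	arch_flush_dcache(page);	/* stand-in for the real flush */
}

void update_mmu_cache_range(struct vm_area_struct *vma,
		unsigned long address, pte_t *ptep, unsigned int nr)
{
	unsigned int i;

	for (i = 0; i < nr; i++) {
		/* Simplified: real code checks pte_present() first. */
		struct page *page = pte_page(ptep[i]);

		/* A user mapping now exists: pay any deferred flush. */
		if (test_and_clear_bit(PG_arch_1, &page->flags))
			arch_flush_dcache(page);
	}
}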