
[3/3] fb_defio: do not use deprecated page->mapping, index fields

Message ID 3542c5bb74d2487cf45d1d02ee5e73a05c4d279a.1738347308.git.lorenzo.stoakes@oracle.com (mailing list archive)
State New
Series: expose mapping wrprotect, fix fb_defio use

Commit Message

Lorenzo Stoakes Jan. 31, 2025, 6:28 p.m. UTC
With the introduction of mapping_wrprotect_page() there is no need to use
folio_mkclean() in order to write-protect mappings of frame buffer pages,
and therefore no need to inappropriately set kernel-allocated page->index,
mapping fields to permit this operation.

Instead, store the pointer to the page cache object for the mapped driver
in the fb_deferred_io object, and use the already stored page offset from
the pageref object to look up mappings in order to write-protect them.

This is justified, as for the page objects to store a mapping pointer at
the point of assignment of pages, they must all reference the same
underlying address_space object. Since the lifetime of the pagerefs is
also the lifetime of the fb_deferred_io object, storing the pointer here
makes sense.

This eliminates the need for all of the logic around setting and
maintaining page->index,mapping which we remove.

This eliminates the use of folio_mkclean() entirely but otherwise should
have no functional change.

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Tested-by: Kajtar Zsolt <soci@c64.rulez.org>
---
 drivers/video/fbdev/core/fb_defio.c | 38 +++++++++--------------------
 include/linux/fb.h                  |  1 +
 2 files changed, 12 insertions(+), 27 deletions(-)

Comments

Lorenzo Stoakes Feb. 1, 2025, 5:06 p.m. UTC | #1
(This time sent in reply to the correct series...)

On Fri, Jan 31, 2025 at 06:28:58PM +0000, Lorenzo Stoakes wrote:
> With the introduction of mapping_wrprotect_page() there is no need to use
> folio_mkclean() in order to write-protect mappings of frame buffer pages,
> and therefore no need to inappropriately set kernel-allocated page->index,
> mapping fields to permit this operation.
>
> Instead, store the pointer to the page cache object for the mapped driver
> in the fb_deferred_io object, and use the already stored page offset from
> the pageref object to look up mappings in order to write-protect them.
>
> This is justified, as for the page objects to store a mapping pointer at
> the point of assignment of pages, they must all reference the same
> underlying address_space object. Since the lifetime of the pagerefs is
> also the lifetime of the fb_deferred_io object, storing the pointer here
> makes sense.
>
> This eliminates the need for all of the logic around setting and
> maintaining page->index,mapping which we remove.
>
> This eliminates the use of folio_mkclean() entirely but otherwise should
> have no functional change.
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Tested-by: Kajtar Zsolt <soci@c64.rulez.org>

Andrew -

Sorry to be a pain, but could you please apply the attached fix-patch to
avoid build bot failures when randconfig generates invalid
configurations? The defio mechanism relies entirely upon the page faulting
mechanism, and thus an MMU, to function.

This was previously masked because folio_mkclean() happens to have a
!CONFIG_MMU stub. It really doesn't make sense to add such a stub for
mapping_wrprotect_page() on architectures without an MMU.

Instead, correctly express the actual dependency in Kconfig, which should
prevent randconfig from doing the wrong thing and also help document this
fact about defio.

Thanks!

----8<----
From 32abcfbb8dea92d9a8a99e6a86f45a1823a75c59 Mon Sep 17 00:00:00 2001
From: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Date: Sat, 1 Feb 2025 16:56:02 +0000
Subject: [PATCH] fbdev: have CONFIG_FB_DEFERRED_IO depend on CONFIG_MMU

Frame buffer deferred I/O is entirely reliant on the page faulting
mechanism (and thus, an MMU) to function.

Express this dependency in the Kconfig, as otherwise randconfig could
generate invalid configurations resulting in build errors.

Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202502020030.MnEJ847Z-lkp@intel.com/
---
 drivers/video/fbdev/core/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/video/fbdev/core/Kconfig b/drivers/video/fbdev/core/Kconfig
index d554d8c543d4..154804914680 100644
--- a/drivers/video/fbdev/core/Kconfig
+++ b/drivers/video/fbdev/core/Kconfig
@@ -135,6 +135,7 @@ config FB_SYSMEM_FOPS
 config FB_DEFERRED_IO
 	bool
 	depends on FB_CORE
+	depends on MMU

 config FB_DMAMEM_HELPERS
 	bool
--
2.48.1
Thomas Zimmermann Feb. 4, 2025, 8:21 a.m. UTC | #2
Hi


Am 31.01.25 um 19:28 schrieb Lorenzo Stoakes:
> With the introduction of mapping_wrprotect_page() there is no need to use
> folio_mkclean() in order to write-protect mappings of frame buffer pages,
> and therefore no need to inappropriately set kernel-allocated page->index,
> mapping fields to permit this operation.
>
> Instead, store the pointer to the page cache object for the mapped driver
> in the fb_deferred_io object, and use the already stored page offset from
> the pageref object to look up mappings in order to write-protect them.
>
> This is justified, as for the page objects to store a mapping pointer at
> the point of assignment of pages, they must all reference the same
> underlying address_space object. Since the lifetime of the pagerefs is
> also the lifetime of the fb_deferred_io object, storing the pointer here
> makes sense.
>
> This eliminates the need for all of the logic around setting and
> maintaining page->index,mapping which we remove.
>
> This eliminates the use of folio_mkclean() entirely but otherwise should
> have no functional change.
>
> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Tested-by: Kajtar Zsolt <soci@c64.rulez.org>
> ---
>   drivers/video/fbdev/core/fb_defio.c | 38 +++++++++--------------------
>   include/linux/fb.h                  |  1 +
>   2 files changed, 12 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
> index 65363df8e81b..b9bab27a8c0f 100644
> --- a/drivers/video/fbdev/core/fb_defio.c
> +++ b/drivers/video/fbdev/core/fb_defio.c
> @@ -69,14 +69,6 @@ static struct fb_deferred_io_pageref *fb_deferred_io_pageref_lookup(struct fb_in
>   	return pageref;
>   }
>   
> -static void fb_deferred_io_pageref_clear(struct fb_deferred_io_pageref *pageref)
> -{
> -	struct page *page = pageref->page;
> -
> -	if (page)
> -		page->mapping = NULL;
> -}
> -
>   static struct fb_deferred_io_pageref *fb_deferred_io_pageref_get(struct fb_info *info,
>   								 unsigned long offset,
>   								 struct page *page)
> @@ -140,13 +132,10 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
>   	if (!page)
>   		return VM_FAULT_SIGBUS;
>   
> -	if (vmf->vma->vm_file)
> -		page->mapping = vmf->vma->vm_file->f_mapping;
> -	else
> +	if (!vmf->vma->vm_file)
>   		printk(KERN_ERR "no mapping available\n");

fb_err() here.

>   
> -	BUG_ON(!page->mapping);
> -	page->index = vmf->pgoff; /* for folio_mkclean() */
> +	BUG_ON(!info->fbdefio->mapping);
>   
>   	vmf->page = page;
>   	return 0;
> @@ -194,9 +183,9 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long
>   
>   	/*
>   	 * We want the page to remain locked from ->page_mkwrite until
> -	 * the PTE is marked dirty to avoid folio_mkclean() being called
> -	 * before the PTE is updated, which would leave the page ignored
> -	 * by defio.
> +	 * the PTE is marked dirty to avoid mapping_wrprotect_page()
> +	 * being called before the PTE is updated, which would leave
> +	 * the page ignored by defio.
>   	 * Do this by locking the page here and informing the caller
>   	 * about it with VM_FAULT_LOCKED.
>   	 */
> @@ -274,14 +263,13 @@ static void fb_deferred_io_work(struct work_struct *work)
>   	struct fb_deferred_io_pageref *pageref, *next;
>   	struct fb_deferred_io *fbdefio = info->fbdefio;
>   
> -	/* here we mkclean the pages, then do all deferred IO */
> +	/* here we wrprotect the page's mappings, then do all deferred IO. */
>   	mutex_lock(&fbdefio->lock);
>   	list_for_each_entry(pageref, &fbdefio->pagereflist, list) {
> -		struct folio *folio = page_folio(pageref->page);
> +		struct page *page = pageref->page;
> +		pgoff_t pgoff = pageref->offset >> PAGE_SHIFT;
>   
> -		folio_lock(folio);
> -		folio_mkclean(folio);
> -		folio_unlock(folio);
> +		mapping_wrprotect_page(fbdefio->mapping, pgoff, 1, page);
>   	}
>   
>   	/* driver's callback with pagereflist */
> @@ -337,6 +325,7 @@ void fb_deferred_io_open(struct fb_info *info,
>   {
>   	struct fb_deferred_io *fbdefio = info->fbdefio;
>   
> +	fbdefio->mapping = file->f_mapping;

Does this still work if more than one program opens the file?

Best regards
Thomas

>   	file->f_mapping->a_ops = &fb_deferred_io_aops;
>   	fbdefio->open_count++;
>   }
> @@ -344,13 +333,7 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_open);
>   
>   static void fb_deferred_io_lastclose(struct fb_info *info)
>   {
> -	unsigned long i;
> -
>   	flush_delayed_work(&info->deferred_work);
> -
> -	/* clear out the mapping that we setup */
> -	for (i = 0; i < info->npagerefs; ++i)
> -		fb_deferred_io_pageref_clear(&info->pagerefs[i]);
>   }
>   
>   void fb_deferred_io_release(struct fb_info *info)
> @@ -370,5 +353,6 @@ void fb_deferred_io_cleanup(struct fb_info *info)
>   
>   	kvfree(info->pagerefs);
>   	mutex_destroy(&fbdefio->lock);
> +	fbdefio->mapping = NULL;
>   }
>   EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
> diff --git a/include/linux/fb.h b/include/linux/fb.h
> index 5ba187e08cf7..cd653862ab99 100644
> --- a/include/linux/fb.h
> +++ b/include/linux/fb.h
> @@ -225,6 +225,7 @@ struct fb_deferred_io {
>   	int open_count; /* number of opened files; protected by fb_info lock */
>   	struct mutex lock; /* mutex that protects the pageref list */
>   	struct list_head pagereflist; /* list of pagerefs for touched pages */
> +	struct address_space *mapping; /* page cache object for fb device */
>   	/* callback */
>   	struct page *(*get_page)(struct fb_info *info, unsigned long offset);
>   	void (*deferred_io)(struct fb_info *info, struct list_head *pagelist);
Lorenzo Stoakes Feb. 4, 2025, 8:37 a.m. UTC | #3
On Tue, Feb 04, 2025 at 09:21:55AM +0100, Thomas Zimmermann wrote:
> Hi
>
>
> Am 31.01.25 um 19:28 schrieb Lorenzo Stoakes:
> > With the introduction of mapping_wrprotect_page() there is no need to use
> > folio_mkclean() in order to write-protect mappings of frame buffer pages,
> > and therefore no need to inappropriately set kernel-allocated page->index,
> > mapping fields to permit this operation.
> >
> > Instead, store the pointer to the page cache object for the mapped driver
> > in the fb_deferred_io object, and use the already stored page offset from
> > the pageref object to look up mappings in order to write-protect them.
> >
> > This is justified, as for the page objects to store a mapping pointer at
> > the point of assignment of pages, they must all reference the same
> > underlying address_space object. Since the lifetime of the pagerefs is
> > also the lifetime of the fb_deferred_io object, storing the pointer here
> > makes sense.
> >
> > This eliminates the need for all of the logic around setting and
> > maintaining page->index,mapping which we remove.
> >
> > This eliminates the use of folio_mkclean() entirely but otherwise should
> > have no functional change.
> >
> > Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > Tested-by: Kajtar Zsolt <soci@c64.rulez.org>
> > ---
> >   drivers/video/fbdev/core/fb_defio.c | 38 +++++++++--------------------
> >   include/linux/fb.h                  |  1 +
> >   2 files changed, 12 insertions(+), 27 deletions(-)
> >
> > diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
> > index 65363df8e81b..b9bab27a8c0f 100644
> > --- a/drivers/video/fbdev/core/fb_defio.c
> > +++ b/drivers/video/fbdev/core/fb_defio.c
> > @@ -69,14 +69,6 @@ static struct fb_deferred_io_pageref *fb_deferred_io_pageref_lookup(struct fb_in
> >   	return pageref;
> >   }
> > -static void fb_deferred_io_pageref_clear(struct fb_deferred_io_pageref *pageref)
> > -{
> > -	struct page *page = pageref->page;
> > -
> > -	if (page)
> > -		page->mapping = NULL;
> > -}
> > -
> >   static struct fb_deferred_io_pageref *fb_deferred_io_pageref_get(struct fb_info *info,
> >   								 unsigned long offset,
> >   								 struct page *page)
> > @@ -140,13 +132,10 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
> >   	if (!page)
> >   		return VM_FAULT_SIGBUS;
> > -	if (vmf->vma->vm_file)
> > -		page->mapping = vmf->vma->vm_file->f_mapping;
> > -	else
> > +	if (!vmf->vma->vm_file)
> >   		printk(KERN_ERR "no mapping available\n");
>
> fb_err() here.

Ack, will fix on respin.

>
> > -	BUG_ON(!page->mapping);
> > -	page->index = vmf->pgoff; /* for folio_mkclean() */
> > +	BUG_ON(!info->fbdefio->mapping);
> >   	vmf->page = page;
> >   	return 0;
> > @@ -194,9 +183,9 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long
> >   	/*
> >   	 * We want the page to remain locked from ->page_mkwrite until
> > -	 * the PTE is marked dirty to avoid folio_mkclean() being called
> > -	 * before the PTE is updated, which would leave the page ignored
> > -	 * by defio.
> > +	 * the PTE is marked dirty to avoid mapping_wrprotect_page()
> > +	 * being called before the PTE is updated, which would leave
> > +	 * the page ignored by defio.
> >   	 * Do this by locking the page here and informing the caller
> >   	 * about it with VM_FAULT_LOCKED.
> >   	 */
> > @@ -274,14 +263,13 @@ static void fb_deferred_io_work(struct work_struct *work)
> >   	struct fb_deferred_io_pageref *pageref, *next;
> >   	struct fb_deferred_io *fbdefio = info->fbdefio;
> > -	/* here we mkclean the pages, then do all deferred IO */
> > +	/* here we wrprotect the page's mappings, then do all deferred IO. */
> >   	mutex_lock(&fbdefio->lock);
> >   	list_for_each_entry(pageref, &fbdefio->pagereflist, list) {
> > -		struct folio *folio = page_folio(pageref->page);
> > +		struct page *page = pageref->page;
> > +		pgoff_t pgoff = pageref->offset >> PAGE_SHIFT;
> > -		folio_lock(folio);
> > -		folio_mkclean(folio);
> > -		folio_unlock(folio);
> > +		mapping_wrprotect_page(fbdefio->mapping, pgoff, 1, page);
> >   	}
> >   	/* driver's callback with pagereflist */
> > @@ -337,6 +325,7 @@ void fb_deferred_io_open(struct fb_info *info,
> >   {
> >   	struct fb_deferred_io *fbdefio = info->fbdefio;
> > +	fbdefio->mapping = file->f_mapping;
>
> Does this still work if more than one program opens the file?

Yes, the mapping (address_space) pointer will remain the same across the
board. The way defio is implemented absolutely relies on this assumption.

>
> Best regards
> Thomas
>
> >   	file->f_mapping->a_ops = &fb_deferred_io_aops;
> >   	fbdefio->open_count++;
> >   }
> > @@ -344,13 +333,7 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_open);
> >   static void fb_deferred_io_lastclose(struct fb_info *info)
> >   {
> > -	unsigned long i;
> > -
> >   	flush_delayed_work(&info->deferred_work);
> > -
> > -	/* clear out the mapping that we setup */
> > -	for (i = 0; i < info->npagerefs; ++i)
> > -		fb_deferred_io_pageref_clear(&info->pagerefs[i]);
> >   }
> >   void fb_deferred_io_release(struct fb_info *info)
> > @@ -370,5 +353,6 @@ void fb_deferred_io_cleanup(struct fb_info *info)
> >   	kvfree(info->pagerefs);
> >   	mutex_destroy(&fbdefio->lock);
> > +	fbdefio->mapping = NULL;
> >   }
> >   EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
> > diff --git a/include/linux/fb.h b/include/linux/fb.h
> > index 5ba187e08cf7..cd653862ab99 100644
> > --- a/include/linux/fb.h
> > +++ b/include/linux/fb.h
> > @@ -225,6 +225,7 @@ struct fb_deferred_io {
> >   	int open_count; /* number of opened files; protected by fb_info lock */
> >   	struct mutex lock; /* mutex that protects the pageref list */
> >   	struct list_head pagereflist; /* list of pagerefs for touched pages */
> > +	struct address_space *mapping; /* page cache object for fb device */
> >   	/* callback */
> >   	struct page *(*get_page)(struct fb_info *info, unsigned long offset);
> >   	void (*deferred_io)(struct fb_info *info, struct list_head *pagelist);
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Frankenstrasse 146, 90461 Nuernberg, Germany
> GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
> HRB 36809 (AG Nuernberg)
>
Thomas Zimmermann Feb. 4, 2025, 8:57 a.m. UTC | #4
Hi


Am 04.02.25 um 09:37 schrieb Lorenzo Stoakes:
> On Tue, Feb 04, 2025 at 09:21:55AM +0100, Thomas Zimmermann wrote:
>> Hi
>>
>>
>> Am 31.01.25 um 19:28 schrieb Lorenzo Stoakes:
>>> With the introduction of mapping_wrprotect_page() there is no need to use
>>> folio_mkclean() in order to write-protect mappings of frame buffer pages,
>>> and therefore no need to inappropriately set kernel-allocated page->index,
>>> mapping fields to permit this operation.
>>>
>>> Instead, store the pointer to the page cache object for the mapped driver
>>> in the fb_deferred_io object, and use the already stored page offset from
>>> the pageref object to look up mappings in order to write-protect them.
>>>
>>> This is justified, as for the page objects to store a mapping pointer at
>>> the point of assignment of pages, they must all reference the same
>>> underlying address_space object. Since the lifetime of the pagerefs is
>>> also the lifetime of the fb_deferred_io object, storing the pointer here
>>> makes sense.
>>>
>>> This eliminates the need for all of the logic around setting and
>>> maintaining page->index,mapping which we remove.
>>>
>>> This eliminates the use of folio_mkclean() entirely but otherwise should
>>> have no functional change.
>>>
>>> Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>>> Tested-by: Kajtar Zsolt <soci@c64.rulez.org>
>>> ---
>>>    drivers/video/fbdev/core/fb_defio.c | 38 +++++++++--------------------
>>>    include/linux/fb.h                  |  1 +
>>>    2 files changed, 12 insertions(+), 27 deletions(-)
>>>
>>> diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
>>> index 65363df8e81b..b9bab27a8c0f 100644
>>> --- a/drivers/video/fbdev/core/fb_defio.c
>>> +++ b/drivers/video/fbdev/core/fb_defio.c
>>> @@ -69,14 +69,6 @@ static struct fb_deferred_io_pageref *fb_deferred_io_pageref_lookup(struct fb_in
>>>    	return pageref;
>>>    }
>>> -static void fb_deferred_io_pageref_clear(struct fb_deferred_io_pageref *pageref)
>>> -{
>>> -	struct page *page = pageref->page;
>>> -
>>> -	if (page)
>>> -		page->mapping = NULL;
>>> -}
>>> -
>>>    static struct fb_deferred_io_pageref *fb_deferred_io_pageref_get(struct fb_info *info,
>>>    								 unsigned long offset,
>>>    								 struct page *page)
>>> @@ -140,13 +132,10 @@ static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
>>>    	if (!page)
>>>    		return VM_FAULT_SIGBUS;
>>> -	if (vmf->vma->vm_file)
>>> -		page->mapping = vmf->vma->vm_file->f_mapping;
>>> -	else
>>> +	if (!vmf->vma->vm_file)
>>>    		printk(KERN_ERR "no mapping available\n");
>> fb_err() here.
> Ack, will fix on respin.
>
>>> -	BUG_ON(!page->mapping);
>>> -	page->index = vmf->pgoff; /* for folio_mkclean() */
>>> +	BUG_ON(!info->fbdefio->mapping);
>>>    	vmf->page = page;
>>>    	return 0;
>>> @@ -194,9 +183,9 @@ static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long
>>>    	/*
>>>    	 * We want the page to remain locked from ->page_mkwrite until
>>> -	 * the PTE is marked dirty to avoid folio_mkclean() being called
>>> -	 * before the PTE is updated, which would leave the page ignored
>>> -	 * by defio.
>>> +	 * the PTE is marked dirty to avoid mapping_wrprotect_page()
>>> +	 * being called before the PTE is updated, which would leave
>>> +	 * the page ignored by defio.
>>>    	 * Do this by locking the page here and informing the caller
>>>    	 * about it with VM_FAULT_LOCKED.
>>>    	 */
>>> @@ -274,14 +263,13 @@ static void fb_deferred_io_work(struct work_struct *work)
>>>    	struct fb_deferred_io_pageref *pageref, *next;
>>>    	struct fb_deferred_io *fbdefio = info->fbdefio;
>>> -	/* here we mkclean the pages, then do all deferred IO */
>>> +	/* here we wrprotect the page's mappings, then do all deferred IO. */
>>>    	mutex_lock(&fbdefio->lock);
>>>    	list_for_each_entry(pageref, &fbdefio->pagereflist, list) {
>>> -		struct folio *folio = page_folio(pageref->page);
>>> +		struct page *page = pageref->page;
>>> +		pgoff_t pgoff = pageref->offset >> PAGE_SHIFT;
>>> -		folio_lock(folio);
>>> -		folio_mkclean(folio);
>>> -		folio_unlock(folio);
>>> +		mapping_wrprotect_page(fbdefio->mapping, pgoff, 1, page);
>>>    	}
>>>    	/* driver's callback with pagereflist */
>>> @@ -337,6 +325,7 @@ void fb_deferred_io_open(struct fb_info *info,
>>>    {
>>>    	struct fb_deferred_io *fbdefio = info->fbdefio;
>>> +	fbdefio->mapping = file->f_mapping;
>> Does this still work if more than one program opens the file?
> Yes, the mapping (address_space) pointer will remain the same across the
> board. The way defio is implemented absolutely relies on this assumption.

Great. With the fb_err() fixed, you can add

Acked-by: Thomas Zimmermann <tzimmermann@suse.de>

to the patch.

Best regards
Thomas

>
>> Best regards
>> Thomas
>>
>>>    	file->f_mapping->a_ops = &fb_deferred_io_aops;
>>>    	fbdefio->open_count++;
>>>    }
>>> @@ -344,13 +333,7 @@ EXPORT_SYMBOL_GPL(fb_deferred_io_open);
>>>    static void fb_deferred_io_lastclose(struct fb_info *info)
>>>    {
>>> -	unsigned long i;
>>> -
>>>    	flush_delayed_work(&info->deferred_work);
>>> -
>>> -	/* clear out the mapping that we setup */
>>> -	for (i = 0; i < info->npagerefs; ++i)
>>> -		fb_deferred_io_pageref_clear(&info->pagerefs[i]);
>>>    }
>>>    void fb_deferred_io_release(struct fb_info *info)
>>> @@ -370,5 +353,6 @@ void fb_deferred_io_cleanup(struct fb_info *info)
>>>    	kvfree(info->pagerefs);
>>>    	mutex_destroy(&fbdefio->lock);
>>> +	fbdefio->mapping = NULL;
>>>    }
>>>    EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
>>> diff --git a/include/linux/fb.h b/include/linux/fb.h
>>> index 5ba187e08cf7..cd653862ab99 100644
>>> --- a/include/linux/fb.h
>>> +++ b/include/linux/fb.h
>>> @@ -225,6 +225,7 @@ struct fb_deferred_io {
>>>    	int open_count; /* number of opened files; protected by fb_info lock */
>>>    	struct mutex lock; /* mutex that protects the pageref list */
>>>    	struct list_head pagereflist; /* list of pagerefs for touched pages */
>>> +	struct address_space *mapping; /* page cache object for fb device */
>>>    	/* callback */
>>>    	struct page *(*get_page)(struct fb_info *info, unsigned long offset);
>>>    	void (*deferred_io)(struct fb_info *info, struct list_head *pagelist);
>> --
>> Thomas Zimmermann
>> Graphics Driver Developer
>> SUSE Software Solutions Germany GmbH
>> Frankenstrasse 146, 90461 Nuernberg, Germany
>> GF: Ivo Totev, Andrew Myers, Andrew McDonald, Boudien Moerman
>> HRB 36809 (AG Nuernberg)
>>

Patch

diff --git a/drivers/video/fbdev/core/fb_defio.c b/drivers/video/fbdev/core/fb_defio.c
index 65363df8e81b..b9bab27a8c0f 100644
--- a/drivers/video/fbdev/core/fb_defio.c
+++ b/drivers/video/fbdev/core/fb_defio.c
@@ -69,14 +69,6 @@  static struct fb_deferred_io_pageref *fb_deferred_io_pageref_lookup(struct fb_in
 	return pageref;
 }
 
-static void fb_deferred_io_pageref_clear(struct fb_deferred_io_pageref *pageref)
-{
-	struct page *page = pageref->page;
-
-	if (page)
-		page->mapping = NULL;
-}
-
 static struct fb_deferred_io_pageref *fb_deferred_io_pageref_get(struct fb_info *info,
 								 unsigned long offset,
 								 struct page *page)
@@ -140,13 +132,10 @@  static vm_fault_t fb_deferred_io_fault(struct vm_fault *vmf)
 	if (!page)
 		return VM_FAULT_SIGBUS;
 
-	if (vmf->vma->vm_file)
-		page->mapping = vmf->vma->vm_file->f_mapping;
-	else
+	if (!vmf->vma->vm_file)
 		printk(KERN_ERR "no mapping available\n");
 
-	BUG_ON(!page->mapping);
-	page->index = vmf->pgoff; /* for folio_mkclean() */
+	BUG_ON(!info->fbdefio->mapping);
 
 	vmf->page = page;
 	return 0;
@@ -194,9 +183,9 @@  static vm_fault_t fb_deferred_io_track_page(struct fb_info *info, unsigned long
 
 	/*
 	 * We want the page to remain locked from ->page_mkwrite until
-	 * the PTE is marked dirty to avoid folio_mkclean() being called
-	 * before the PTE is updated, which would leave the page ignored
-	 * by defio.
+	 * the PTE is marked dirty to avoid mapping_wrprotect_page()
+	 * being called before the PTE is updated, which would leave
+	 * the page ignored by defio.
 	 * Do this by locking the page here and informing the caller
 	 * about it with VM_FAULT_LOCKED.
 	 */
@@ -274,14 +263,13 @@  static void fb_deferred_io_work(struct work_struct *work)
 	struct fb_deferred_io_pageref *pageref, *next;
 	struct fb_deferred_io *fbdefio = info->fbdefio;
 
-	/* here we mkclean the pages, then do all deferred IO */
+	/* here we wrprotect the page's mappings, then do all deferred IO. */
 	mutex_lock(&fbdefio->lock);
 	list_for_each_entry(pageref, &fbdefio->pagereflist, list) {
-		struct folio *folio = page_folio(pageref->page);
+		struct page *page = pageref->page;
+		pgoff_t pgoff = pageref->offset >> PAGE_SHIFT;
 
-		folio_lock(folio);
-		folio_mkclean(folio);
-		folio_unlock(folio);
+		mapping_wrprotect_page(fbdefio->mapping, pgoff, 1, page);
 	}
 
 	/* driver's callback with pagereflist */
@@ -337,6 +325,7 @@  void fb_deferred_io_open(struct fb_info *info,
 {
 	struct fb_deferred_io *fbdefio = info->fbdefio;
 
+	fbdefio->mapping = file->f_mapping;
 	file->f_mapping->a_ops = &fb_deferred_io_aops;
 	fbdefio->open_count++;
 }
@@ -344,13 +333,7 @@  EXPORT_SYMBOL_GPL(fb_deferred_io_open);
 
 static void fb_deferred_io_lastclose(struct fb_info *info)
 {
-	unsigned long i;
-
 	flush_delayed_work(&info->deferred_work);
-
-	/* clear out the mapping that we setup */
-	for (i = 0; i < info->npagerefs; ++i)
-		fb_deferred_io_pageref_clear(&info->pagerefs[i]);
 }
 
 void fb_deferred_io_release(struct fb_info *info)
@@ -370,5 +353,6 @@  void fb_deferred_io_cleanup(struct fb_info *info)
 
 	kvfree(info->pagerefs);
 	mutex_destroy(&fbdefio->lock);
+	fbdefio->mapping = NULL;
 }
 EXPORT_SYMBOL_GPL(fb_deferred_io_cleanup);
diff --git a/include/linux/fb.h b/include/linux/fb.h
index 5ba187e08cf7..cd653862ab99 100644
--- a/include/linux/fb.h
+++ b/include/linux/fb.h
@@ -225,6 +225,7 @@  struct fb_deferred_io {
 	int open_count; /* number of opened files; protected by fb_info lock */
 	struct mutex lock; /* mutex that protects the pageref list */
 	struct list_head pagereflist; /* list of pagerefs for touched pages */
+	struct address_space *mapping; /* page cache object for fb device */
 	/* callback */
 	struct page *(*get_page)(struct fb_info *info, unsigned long offset);
 	void (*deferred_io)(struct fb_info *info, struct list_head *pagelist);