[10/10] mm/migrate.c: call detach_page_private to cleanup code

Message ID 20200517214718.468-11-guoqing.jiang@cloud.ionos.com
State New
Series
  • Introduce attach/detach_page_private to cleanup code

Commit Message

Guoqing Jiang May 17, 2020, 9:47 p.m. UTC
We can clean up the code a little by calling detach_page_private() here.

Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
---
No change since RFC V3.

 mm/migrate.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

Comments

Andrew Morton May 19, 2020, 5:12 a.m. UTC | #1
On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:

> We can cleanup code a little by call detach_page_private here.
> 
> ...
>
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>  	if (rc != MIGRATEPAGE_SUCCESS)
>  		goto unlock_buffers;
>  
> -	ClearPagePrivate(page);
> -	set_page_private(newpage, page_private(page));
> -	set_page_private(page, 0);
> -	put_page(page);
> +	set_page_private(newpage, detach_page_private(page));
>  	get_page(newpage);
>  
>  	bh = head;

mm/migrate.c: In function '__buffer_migrate_page':
./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
 #define set_page_private(page, v) ((page)->private = (v))
                                                    ^
mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
  set_page_private(newpage, detach_page_private(page));
  ^~~~~~~~~~~~~~~~

The fact that set_page_private(detach_page_private()) generates a type
mismatch warning seems deeply wrong, surely.

Please let's get the types sorted out - either unsigned long or void *,
not half-one and half-the other.  Whatever needs the least typecasting
at callsites, I suggest.

And can we please implement set_page_private() and page_private() with
inlined C code?  There is no need for these to be macros.
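The warning Andrew quotes can be reproduced in plain userspace C. The sketch below mocks the structures under discussion (`struct page`, the `refcount` field, and the helper bodies are illustrative assumptions, not the real kernel code) and shows why feeding a `void *` return value into an `unsigned long` field needs an explicit cast:

```c
#include <assert.h>

/* Minimal userspace mock of the structures in this thread; the field and
 * helper names follow the discussion, but this is a sketch, not the real
 * kernel implementation. */
struct page {
	unsigned long private;	/* the real field is unsigned long */
	int refcount;
};

#define page_private(page)		((page)->private)
#define set_page_private(page, v)	((page)->private = (v))

/* detach_page_private() returns void *, so feeding its result straight
 * into set_page_private() assigns a pointer to an unsigned long, which
 * is exactly the -Wint-conversion warning above. */
static void *detach_page_private(struct page *page)
{
	void *data = (void *)page_private(page);

	set_page_private(page, 0);
	page->refcount--;		/* put_page(page) in the real helper */
	return data;
}

static void migrate_private(struct page *newpage, struct page *page)
{
	/* the explicit cast is what the posted hunk was missing */
	set_page_private(newpage, (unsigned long)detach_page_private(page));
	newpage->refcount++;		/* get_page(newpage) */
}
```

With the cast in place, the `->private` value and the reference counts transfer as the original open-coded sequence did.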
Guoqing Jiang May 19, 2020, 7:35 a.m. UTC | #2
On 5/19/20 7:12 AM, Andrew Morton wrote:
> On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
>
>> We can cleanup code a little by call detach_page_private here.
>>
>> ...
>>
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>>   	if (rc != MIGRATEPAGE_SUCCESS)
>>   		goto unlock_buffers;
>>   
>> -	ClearPagePrivate(page);
>> -	set_page_private(newpage, page_private(page));
>> -	set_page_private(page, 0);
>> -	put_page(page);
>> +	set_page_private(newpage, detach_page_private(page));
>>   	get_page(newpage);
>>   
>>   	bh = head;
> mm/migrate.c: In function '__buffer_migrate_page':
> ./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
>   #define set_page_private(page, v) ((page)->private = (v))
>                                                      ^
> mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
>    set_page_private(newpage, detach_page_private(page));
>    ^~~~~~~~~~~~~~~~
>
> The fact that set_page_private(detach_page_private()) generates a type
> mismatch warning seems deeply wrong, surely.
>
> Please let's get the types sorted out - either unsigned long or void *,
> not half-one and half-the other.  Whatever needs the least typecasting
> at callsites, I suggest.

Sorry about that, I should have noticed the warning before. I will double-check
whether other places need the typecast, then send a new version.

> And can we please implement set_page_private() and page_private() with
> inlined C code?  There is no need for these to be macros.

Just did a quick change.

-#define page_private(page)             ((page)->private)
-#define set_page_private(page, v)      ((page)->private = (v))
+static inline unsigned long page_private(struct page *page)
+{
+       return page->private;
+}
+
+static inline void set_page_private(struct page *page, unsigned long 
priv_data)
+{
+       page->private = priv_data;
+}

Then I get an error like:

fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
   u.v = &page_private(page);
         ^

I guess it is better to keep page_private as a macro; please correct me in
case I missed something.

Thanks,
Guoqing
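The "lvalue required" error above can be reproduced in plain userspace C: a macro that expands to `(page)->private` names a field and is an lvalue, while a function returning `unsigned long` yields an rvalue whose address cannot be taken. The sketch below uses mock names (`page_private_fn`, `private_slot`) that are illustrative stand-ins, not the kernel headers:

```c
#include <assert.h>

/* Userspace sketch of why the conversion breaks EROFS: the macro form of
 * page_private() is an lvalue, the inline-function form is not. */
struct page {
	unsigned long private;
};

#define page_private(page)	((page)->private)

static inline unsigned long page_private_fn(struct page *page)
{
	return page->private;
}

static unsigned long *private_slot(struct page *page)
{
	return &page_private(page);	/* OK: the macro names a field */
	/* return &page_private_fn(page); -- error: lvalue required */
}
```

This is why converting `set_page_private()` alone is straightforward, while `page_private()` has callers that rely on the lvalue property.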
Gao Xiang May 19, 2020, 10:06 a.m. UTC | #3
On Tue, May 19, 2020 at 09:35:59AM +0200, Guoqing Jiang wrote:
> On 5/19/20 7:12 AM, Andrew Morton wrote:
> > On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
> >
> > > We can cleanup code a little by call detach_page_private here.
> > >
> > > ...
> > >
> > > --- a/mm/migrate.c
> > > +++ b/mm/migrate.c
> > > @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
> > >   	if (rc != MIGRATEPAGE_SUCCESS)
> > >   		goto unlock_buffers;
> > > -	ClearPagePrivate(page);
> > > -	set_page_private(newpage, page_private(page));
> > > -	set_page_private(page, 0);
> > > -	put_page(page);
> > > +	set_page_private(newpage, detach_page_private(page));
> > >   	get_page(newpage);
> > >   	bh = head;
> > mm/migrate.c: In function '__buffer_migrate_page':
> > ./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
> >   #define set_page_private(page, v) ((page)->private = (v))
> >                                                      ^
> > mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
> >    set_page_private(newpage, detach_page_private(page));
> >    ^~~~~~~~~~~~~~~~
> >
> > The fact that set_page_private(detach_page_private()) generates a type
> > mismatch warning seems deeply wrong, surely.
> >
> > Please let's get the types sorted out - either unsigned long or void *,
> > not half-one and half-the other.  Whatever needs the least typecasting
> > at callsites, I suggest.
>
> Sorry about that, I should notice the warning before. I will double check if
> other
> places need the typecast or not, then send a new version.
>
> > And can we please implement set_page_private() and page_private() with
> > inlined C code?  There is no need for these to be macros.
>
> Just did a quick change.
>
> -#define page_private(page)             ((page)->private)
> -#define set_page_private(page, v)      ((page)->private = (v))
> +static inline unsigned long page_private(struct page *page)
> +{
> +       return page->private;
> +}
> +
> +static inline void set_page_private(struct page *page, unsigned long priv_data)
> +{
> +       page->private = priv_data;
> +}
>
> Then I get error like.
>
> fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
> fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
>   u.v = &page_private(page);
>         ^
>
> I guess it is better to keep page_private as macro, please correct me in
> case I
> missed something.

I guess that you could Cc me in the reply.

In that case, EROFS uses page->private as an atomic integer to
track two partial subpages in one page.

I think you could also use &page->private directly to replace
&page_private(page) here, since I didn't find any reason to prefer
&page_private(page) over &page->private.


In addition, I found some limitations of the new {attach,detach}_page_private
helpers (that is why I was interested in this series at the time [1] [2],
but I finally gave up), since many patterns (not all) in EROFS are:

io_submit (origin, page locked):
attach_page_private(page);
...
put_page(page);

end_io (page locked):
SetPageUptodate(page);
unlock_page(page);

Since the page is always locked, io_submit could be simplified to
set_page_private(page, ...);
SetPagePrivate(page);
which saves both a temporary get_page(page) and a put_page(page),
since this can be regarded as safe while the page is locked.
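The refcount bookkeeping being contrasted here can be sketched in userspace. In the mock below the helper bodies, the `has_private` flag (standing in for PG_private), and the `refcount` field are assumptions for illustration, not the kernel versions:

```c
#include <assert.h>

/* Userspace mock: the generic attach_page_private() pins the page with
 * its own reference, so a caller whose page is already pinned by the
 * page lock pays a get_page()/put_page() round trip it does not need. */
struct page {
	unsigned long private;
	int has_private;	/* stands in for the PG_private flag */
	int refcount;
};

static void get_page(struct page *p) { p->refcount++; }
static void put_page(struct page *p) { p->refcount--; }

/* generic pattern: a reference is held for the lifetime of ->private */
static void attach_page_private(struct page *p, void *data)
{
	get_page(p);
	p->private = (unsigned long)data;
	p->has_private = 1;
}

/* EROFS-style pattern: the page lock already keeps the page alive */
static void attach_private_locked(struct page *p, void *data)
{
	p->private = (unsigned long)data;
	p->has_private = 1;
}
```

Both paths end in the same state; the locked-page variant simply skips the redundant reference pair.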


btw, I noticed the patchset versions are PATCH [3], RFC PATCH [4],
RFC PATCH v2 [5], RFC PATCH v3 [6], PATCH [7]. I also noticed that the
patchset title was changed once, which makes it somewhat harder to
trace the whole discussion history.

[1] https://lore.kernel.org/linux-fsdevel/20200419051404.GA30986@hsiangkao-HP-ZHAN-66-Pro-G1/
[2] https://lore.kernel.org/linux-fsdevel/20200427025752.GA3979@hsiangkao-HP-ZHAN-66-Pro-G1/
[3] https://lore.kernel.org/linux-fsdevel/20200418225123.31850-1-guoqing.jiang@cloud.ionos.com/
[4] https://lore.kernel.org/linux-fsdevel/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/
[5] https://lore.kernel.org/linux-fsdevel/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
[6] https://lore.kernel.org/linux-fsdevel/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
[7] https://lore.kernel.org/linux-fsdevel/20200517214718.468-1-guoqing.jiang@cloud.ionos.com/

Thanks,
Gao Xiang

>
> Thanks,
> Guoqing
>
>
>
Guoqing Jiang May 19, 2020, 11:02 a.m. UTC | #4
On 5/19/20 12:06 PM, Gao Xiang wrote:
> On Tue, May 19, 2020 at 09:35:59AM +0200, Guoqing Jiang wrote:
>> On 5/19/20 7:12 AM, Andrew Morton wrote:
>>> On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
>>>
>>>> We can cleanup code a little by call detach_page_private here.
>>>>
>>>> ...
>>>>
>>>> --- a/mm/migrate.c
>>>> +++ b/mm/migrate.c
>>>> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>>>>    	if (rc != MIGRATEPAGE_SUCCESS)
>>>>    		goto unlock_buffers;
>>>> -	ClearPagePrivate(page);
>>>> -	set_page_private(newpage, page_private(page));
>>>> -	set_page_private(page, 0);
>>>> -	put_page(page);
>>>> +	set_page_private(newpage, detach_page_private(page));
>>>>    	get_page(newpage);
>>>>    	bh = head;
>>> mm/migrate.c: In function '__buffer_migrate_page':
>>> ./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
>>>    #define set_page_private(page, v) ((page)->private = (v))
>>>                                                       ^
>>> mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
>>>     set_page_private(newpage, detach_page_private(page));
>>>     ^~~~~~~~~~~~~~~~
>>>
>>> The fact that set_page_private(detach_page_private()) generates a type
>>> mismatch warning seems deeply wrong, surely.
>>>
>>> Please let's get the types sorted out - either unsigned long or void *,
>>> not half-one and half-the other.  Whatever needs the least typecasting
>>> at callsites, I suggest.
>> Sorry about that, I should notice the warning before. I will double check if
>> other
>> places need the typecast or not, then send a new version.
>>
>>> And can we please implement set_page_private() and page_private() with
>>> inlined C code?  There is no need for these to be macros.
>> Just did a quick change.
>>
>> -#define page_private(page)             ((page)->private)
>> -#define set_page_private(page, v)      ((page)->private = (v))
>> +static inline unsigned long page_private(struct page *page)
>> +{
>> +       return page->private;
>> +}
>> +
>> +static inline void set_page_private(struct page *page, unsigned long priv_data)
>> +{
>> +       page->private = priv_data;
>> +}
>>
>> Then I get error like.
>>
>> fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
>> fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
>>   u.v = &page_private(page);
>>         ^
>>
>> I guess it is better to keep page_private as macro, please correct me in
>> case I
>> missed something.
> I guess that you could Cc me in the reply.

Sorry for not doing that ...

> In that case, EROFS uses page->private as an atomic integer to
> trace 2 partial subpages in one page.
>
> I think that you could also use &page->private instead directly to
> replace &page_private(page) here since I didn't find some hint to
> pick &page_private(page) or &page->private.

Thanks for the input. I just did a quick test, so I need to investigate more.
And I think it is better to have a separate thread to change those macros to
inline functions, then fix the issues caused by the change.

> In addition, I found some limitation of new {attach,detach}_page_private
> helper (that is why I was interested in this series at that time [1] [2],
> but I gave up finally) since many patterns (not all) in EROFS are
>
> io_submit (origin, page locked):
> attach_page_private(page);
> ...
> put_page(page);
>
> end_io (page locked):
> SetPageUptodate(page);
> unlock_page(page);
>
> since the page is always locked, so io_submit could be simplified as
> set_page_private(page, ...);
> SetPagePrivate(page);
> , which can save both one temporary get_page(page) and one
> put_page(page) since it could be regarded as safe with page locked.

SetPageUptodate is not called inside {attach,detach}_page_private, so I
have probably misunderstood your point. Maybe you want the new pair to be
able to handle locked pages; care to elaborate more?

> btw, I noticed the patchset versions are PATCH [3], RFC PATCH [4],
> RFC PATCH v2 [5], RFC PATCH v3 [6], PATCH [7]. Although I also
> noticed the patchset title was once changed, but it could be some
> harder to trace the whole history discussion.
>
> [1] https://lore.kernel.org/linux-fsdevel/20200419051404.GA30986@hsiangkao-HP-ZHAN-66-Pro-G1/
> [2] https://lore.kernel.org/linux-fsdevel/20200427025752.GA3979@hsiangkao-HP-ZHAN-66-Pro-G1/
> [3] https://lore.kernel.org/linux-fsdevel/20200418225123.31850-1-guoqing.jiang@cloud.ionos.com/
> [4] https://lore.kernel.org/linux-fsdevel/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/
> [5] https://lore.kernel.org/linux-fsdevel/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
> [6] https://lore.kernel.org/linux-fsdevel/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
> [7] https://lore.kernel.org/linux-fsdevel/20200517214718.468-1-guoqing.jiang@cloud.ionos.com/

All the cover letters of those series are here.

RFC V3:https://lore.kernel.org/lkml/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
RFC V2:https://lore.kernel.org/lkml/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
RFC:https://lore.kernel.org/lkml/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/

And the latest one:

https://lore.kernel.org/lkml/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/


Thanks,
Guoqing
Gao Xiang May 19, 2020, 11:31 a.m. UTC | #5
On Tue, May 19, 2020 at 01:02:26PM +0200, Guoqing Jiang wrote:
> On 5/19/20 12:06 PM, Gao Xiang wrote:
> > On Tue, May 19, 2020 at 09:35:59AM +0200, Guoqing Jiang wrote:
> > > On 5/19/20 7:12 AM, Andrew Morton wrote:
> > > > On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
> > > >
> > > > > We can cleanup code a little by call detach_page_private here.
> > > > >
> > > > > ...
> > > > >
> > > > > --- a/mm/migrate.c
> > > > > +++ b/mm/migrate.c
> > > > > @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
> > > > >    	if (rc != MIGRATEPAGE_SUCCESS)
> > > > >    		goto unlock_buffers;
> > > > > -	ClearPagePrivate(page);
> > > > > -	set_page_private(newpage, page_private(page));
> > > > > -	set_page_private(page, 0);
> > > > > -	put_page(page);
> > > > > +	set_page_private(newpage, detach_page_private(page));
> > > > >    	get_page(newpage);
> > > > >    	bh = head;
> > > > mm/migrate.c: In function '__buffer_migrate_page':
> > > > ./include/linux/mm_types.h:243:52: warning: assignment makes integer from pointer without a cast [-Wint-conversion]
> > > >    #define set_page_private(page, v) ((page)->private = (v))
> > > >                                                       ^
> > > > mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
> > > >     set_page_private(newpage, detach_page_private(page));
> > > >     ^~~~~~~~~~~~~~~~
> > > >
> > > > The fact that set_page_private(detach_page_private()) generates a type
> > > > mismatch warning seems deeply wrong, surely.
> > > >
> > > > Please let's get the types sorted out - either unsigned long or void *,
> > > > not half-one and half-the other.  Whatever needs the least typecasting
> > > > at callsites, I suggest.
> > > Sorry about that, I should notice the warning before. I will double check if
> > > other
> > > places need the typecast or not, then send a new version.
> > >
> > > > And can we please implement set_page_private() and page_private() with
> > > > inlined C code?  There is no need for these to be macros.
> > > Just did a quick change.
> > >
> > > -#define page_private(page)             ((page)->private)
> > > -#define set_page_private(page, v)      ((page)->private = (v))
> > > +static inline unsigned long page_private(struct page *page)
> > > +{
> > > +       return page->private;
> > > +}
> > > +
> > > +static inline void set_page_private(struct page *page, unsigned long priv_data)
> > > +{
> > > +       page->private = priv_data;
> > > +}
> > >
> > > Then I get error like.
> > >
> > > fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
> > > fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
> > >   u.v = &page_private(page);
> > >         ^
> > >
> > > I guess it is better to keep page_private as macro, please correct me in
> > > case I
> > > missed something.
> > I guess that you could Cc me in the reply.
>
> Sorry for not do that ...
>
> > In that case, EROFS uses page->private as an atomic integer to
> > trace 2 partial subpages in one page.
> >
> > I think that you could also use &page->private instead directly to
> > replace &page_private(page) here since I didn't find some hint to
> > pick &page_private(page) or &page->private.
>
> Thanks for the input, I just did a quick test, so need to investigate more.
> And I think it is better to have another thread to change those macros to
> inline function, then fix related issues due to the change.

I have no problem with that. Actually I did some type punning, but I'm
not sure it was done in a proper way. I'm very happy to improve that as
well.

>
> > In addition, I found some limitation of new {attach,detach}_page_private
> > helper (that is why I was interested in this series at that time [1] [2],
> > but I gave up finally) since many patterns (not all) in EROFS are
> >
> > io_submit (origin, page locked):
> > attach_page_private(page);
> > ...
> > put_page(page);
> >
> > end_io (page locked):
> > SetPageUptodate(page);
> > unlock_page(page);
> >
> > since the page is always locked, so io_submit could be simplified as
> > set_page_private(page, ...);
> > SetPagePrivate(page);
> > , which can save both one temporary get_page(page) and one
> > put_page(page) since it could be regarded as safe with page locked.
>
> The SetPageUptodate is not called inside {attach,detach}_page_private,
> I could probably misunderstand your point, maybe you want the new pairs
> can handle the locked page, care to elaborate more?

It doesn't relate to SetPageUptodate. I just want to say that some
patterns might not benefit.

These helpers are useful indeed.

>
> > btw, I noticed the patchset versions are PATCH [3], RFC PATCH [4],
> > RFC PATCH v2 [5], RFC PATCH v3 [6], PATCH [7]. Although I also
> > noticed the patchset title was once changed, but it could be some
> > harder to trace the whole history discussion.
> >
> > [1] https://lore.kernel.org/linux-fsdevel/20200419051404.GA30986@hsiangkao-HP-ZHAN-66-Pro-G1/
> > [2] https://lore.kernel.org/linux-fsdevel/20200427025752.GA3979@hsiangkao-HP-ZHAN-66-Pro-G1/
> > [3] https://lore.kernel.org/linux-fsdevel/20200418225123.31850-1-guoqing.jiang@cloud.ionos.com/
> > [4] https://lore.kernel.org/linux-fsdevel/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/
> > [5] https://lore.kernel.org/linux-fsdevel/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
> > [6] https://lore.kernel.org/linux-fsdevel/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
> > [7] https://lore.kernel.org/linux-fsdevel/20200517214718.468-1-guoqing.jiang@cloud.ionos.com/
>
> All the cover letter of those series are here.
>
> RFC V3:https://lore.kernel.org/lkml/20200507214400.15785-1-guoqing.jiang@cloud.ionos.com/
> RFC V2:https://lore.kernel.org/lkml/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/
> RFC:https://lore.kernel.org/lkml/20200426214925.10970-1-guoqing.jiang@cloud.ionos.com/
>
> And the latest one:
>
> https://lore.kernel.org/lkml/20200430214450.10662-1-guoqing.jiang@cloud.ionos.com/

Yeah, I noticed these links in the cover letter as well. I was just a
little confused by the version numbers, especially with the original
patchset "[PATCH 0/5] export __clear_page_buffers to cleanup code"
included. That is minor as well.

Thanks for the explanation.

Thanks,
Gao Xiang

>
>
> Thanks,
> Guoqing
Matthew Wilcox May 19, 2020, 3:16 p.m. UTC | #6
On Tue, May 19, 2020 at 06:06:19PM +0800, Gao Xiang wrote:
> In addition, I found some limitation of new {attach,detach}_page_private
> helper (that is why I was interested in this series at that time [1] [2],
> but I gave up finally) since many patterns (not all) in EROFS are
> 
> io_submit (origin, page locked):
> attach_page_private(page);
> ...
> put_page(page);
> 
> end_io (page locked):
> SetPageUptodate(page);
> unlock_page(page);
> 
> since the page is always locked, so io_submit could be simplified as
> set_page_private(page, ...);
> SetPagePrivate(page);
> , which can save both one temporary get_page(page) and one
> put_page(page) since it could be regarded as safe with page locked.

It's fine to use page private like this without incrementing the refcount,
and I can't find any problematic cases in EROFS like those fixed by commit
8e47a457321ca1a74ad194ab5dcbca764bc70731.

So I think the new helpers are not for you, and that's fine.  They'll be
useful for other filesystems which are using page_private differently
from the way that you do.
Gao Xiang May 19, 2020, 3:27 p.m. UTC | #7
Hi Matthew,

On Tue, May 19, 2020 at 08:16:32AM -0700, Matthew Wilcox wrote:
> On Tue, May 19, 2020 at 06:06:19PM +0800, Gao Xiang wrote:
> > In addition, I found some limitation of new {attach,detach}_page_private
> > helper (that is why I was interested in this series at that time [1] [2],
> > but I gave up finally) since many patterns (not all) in EROFS are
> >
> > io_submit (origin, page locked):
> > attach_page_private(page);
> > ...
> > put_page(page);
> >
> > end_io (page locked):
> > SetPageUptodate(page);
> > unlock_page(page);
> >
> > since the page is always locked, so io_submit could be simplified as
> > set_page_private(page, ...);
> > SetPagePrivate(page);
> > , which can save both one temporary get_page(page) and one
> > put_page(page) since it could be regarded as safe with page locked.
>
> It's fine to use page private like this without incrementing the refcount,
> and I can't find any problematic cases in EROFS like those fixed by commit
> 8e47a457321ca1a74ad194ab5dcbca764bc70731
>
> So I think the new helpers are not for you, and that's fine.  They'll be
> useful for other filesystems which are using page_private differently
> from the way that you do.

Yes, I agree. Although there is some dead code in EROFS to handle a
truncation case, which I'd like to use in the future; maybe I can get
rid of it temporarily... But let me first get LZMA fixed-sized output
compression for EROFS into shape, which seems useful as a complement
to LZ4...

Thanks,
Gao Xiang
Guoqing Jiang May 19, 2020, 8:44 p.m. UTC | #8
Hi Andrew,

On 5/19/20 9:35 AM, Guoqing Jiang wrote:
> On 5/19/20 7:12 AM, Andrew Morton wrote:
>> On Sun, 17 May 2020 23:47:18 +0200 Guoqing Jiang 
>> <guoqing.jiang@cloud.ionos.com> wrote:
>>
>>> We can cleanup code a little by call detach_page_private here.
>>>
>>> ...
>>>
>>> --- a/mm/migrate.c
>>> +++ b/mm/migrate.c
>>> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct 
>>> address_space *mapping,
>>>       if (rc != MIGRATEPAGE_SUCCESS)
>>>           goto unlock_buffers;
>>>   -    ClearPagePrivate(page);
>>> -    set_page_private(newpage, page_private(page));
>>> -    set_page_private(page, 0);
>>> -    put_page(page);
>>> +    set_page_private(newpage, detach_page_private(page));
>>>       get_page(newpage);
>>>         bh = head;
>> mm/migrate.c: In function '__buffer_migrate_page':
>> ./include/linux/mm_types.h:243:52: warning: assignment makes integer 
>> from pointer without a cast [-Wint-conversion]
>>   #define set_page_private(page, v) ((page)->private = (v))
>>                                                      ^
>> mm/migrate.c:800:2: note: in expansion of macro 'set_page_private'
>>    set_page_private(newpage, detach_page_private(page));
>>    ^~~~~~~~~~~~~~~~
>>
>> The fact that set_page_private(detach_page_private()) generates a type
>> mismatch warning seems deeply wrong, surely.
>>
>> Please let's get the types sorted out - either unsigned long or void *,
>> not half-one and half-the other.  Whatever needs the least typecasting
>> at callsites, I suggest.
>
> Sorry about that, I should notice the warning before. I will double 
> check if other
> places need the typecast or not, then send a new version.

Only this patch missed the typecast. I guess I just need to send an
updated patch to replace this one (I am fine with sending a new patch
set if you want); sorry again for the trouble.

>
>> And can we please implement set_page_private() and page_private() with
>> inlined C code?  There is no need for these to be macros.
>
> Just did a quick change.
>
> -#define page_private(page)             ((page)->private)
> -#define set_page_private(page, v)      ((page)->private = (v))
> +static inline unsigned long page_private(struct page *page)
> +{
> +       return page->private;
> +}
> +
> +static inline void set_page_private(struct page *page, unsigned long 
> priv_data)
> +{
> +       page->private = priv_data;
> +}
>
> Then I get error like.
>
> fs/erofs/zdata.h: In function ‘z_erofs_onlinepage_index’:
> fs/erofs/zdata.h:126:8: error: lvalue required as unary ‘&’ operand
>   u.v = &page_private(page);
>         ^
>
> I guess it is better to keep page_private as macro, please correct me 
> in case I
> missed something.

Lots of problems need to be fixed if page_private is changed to an
inline function, so I think it is better to keep it as a macro and only
convert set_page_private.

mm/compaction.c: In function ‘isolate_migratepages_block’:
./include/linux/compiler.h:287:20: error: lvalue required as unary ‘&’ operand
    __read_once_size(&(x), __u.__c, sizeof(x));  \
                     ^
./include/linux/compiler.h:293:22: note: in expansion of macro ‘__READ_ONCE’
  #define READ_ONCE(x) __READ_ONCE(x, 1)
                       ^~~~~~~~~~~
mm/internal.h:293:34: note: in expansion of macro ‘READ_ONCE’
  #define page_order_unsafe(page)  READ_ONCE(page_private(page))


Thanks,
Guoqing
Dave Chinner May 21, 2020, 10:52 p.m. UTC | #9
On Sun, May 17, 2020 at 11:47:18PM +0200, Guoqing Jiang wrote:
> We can cleanup code a little by call detach_page_private here.
> 
> Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
> ---
> No change since RFC V3.
> 
>  mm/migrate.c | 5 +----
>  1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 5fed0305d2ec..f99502bc113c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>  	if (rc != MIGRATEPAGE_SUCCESS)
>  		goto unlock_buffers;
>  
> -	ClearPagePrivate(page);
> -	set_page_private(newpage, page_private(page));
> -	set_page_private(page, 0);
> -	put_page(page);
> +	set_page_private(newpage, detach_page_private(page));

attach_page_private(newpage, detach_page_private(page));

Cheers,

Dave.
Guoqing Jiang May 22, 2020, 7:18 a.m. UTC | #10
Hi Dave,

On 5/22/20 12:52 AM, Dave Chinner wrote:
> On Sun, May 17, 2020 at 11:47:18PM +0200, Guoqing Jiang wrote:
>> We can cleanup code a little by call detach_page_private here.
>>
>> Signed-off-by: Guoqing Jiang <guoqing.jiang@cloud.ionos.com>
>> ---
>> No change since RFC V3.
>>
>>   mm/migrate.c | 5 +----
>>   1 file changed, 1 insertion(+), 4 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 5fed0305d2ec..f99502bc113c 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -804,10 +804,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>>   	if (rc != MIGRATEPAGE_SUCCESS)
>>   		goto unlock_buffers;
>>   
>> -	ClearPagePrivate(page);
>> -	set_page_private(newpage, page_private(page));
>> -	set_page_private(page, 0);
>> -	put_page(page);
>> +	set_page_private(newpage, detach_page_private(page));
> attach_page_private(newpage, detach_page_private(page));

Matthew had suggested it as follows, but I am not sure if we can reorder
the setup of the bh and the setting of PagePrivate, so I didn't want to
change the original ordering.

@@ -797,11 +797,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
         if (rc != MIGRATEPAGE_SUCCESS)
                 goto unlock_buffers;
  
-       ClearPagePrivate(page);
-       set_page_private(newpage, page_private(page));
-       set_page_private(page, 0);
-       put_page(page);
-       get_page(newpage);
+       attach_page_private(newpage, detach_page_private(page));
  
         bh = head;
         do {
@@ -810,8 +806,6 @@ static int __buffer_migrate_page(struct address_space *mapping,
  
         } while (bh != head);
  
-       SetPagePrivate(newpage);
-
         if (mode != MIGRATE_SYNC_NO_COPY)



[1]. https://lore.kernel.org/lkml/20200502004158.GD29705@bombadil.infradead.org/
[2]. https://lore.kernel.org/lkml/e4d5ddc0-877f-6499-f697-2b7c0ddbf386@cloud.ionos.com/

Thanks,
Guoqing
Andrew Morton May 22, 2020, 11:53 p.m. UTC | #11
On Fri, 22 May 2020 09:18:25 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:

> >> -	ClearPagePrivate(page);
> >> -	set_page_private(newpage, page_private(page));
> >> -	set_page_private(page, 0);
> >> -	put_page(page);
> >> +	set_page_private(newpage, detach_page_private(page));
> > attach_page_private(newpage, detach_page_private(page));
> 
> Mattew had suggested it as follows, but not sure if we can reorder of 
> the setup of
> the bh and setting PagePrivate, so I didn't want to break the original 
> syntax.
> 
> @@ -797,11 +797,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>          if (rc != MIGRATEPAGE_SUCCESS)
>                  goto unlock_buffers;
>   
> -       ClearPagePrivate(page);
> -       set_page_private(newpage, page_private(page));
> -       set_page_private(page, 0);
> -       put_page(page);
> -       get_page(newpage);
> +       attach_page_private(newpage, detach_page_private(page));
>   
>          bh = head;
>          do {
> @@ -810,8 +806,6 @@ static int __buffer_migrate_page(struct address_space *mapping,
>   
>          } while (bh != head);
>   
> -       SetPagePrivate(newpage);
> -
>          if (mode != MIGRATE_SYNC_NO_COPY)

This is OK - coherency between PG_private and the page's buffer
ring is maintained by holding lock_page().

I have (effectively) applied the above change.
Guoqing Jiang May 24, 2020, 7:02 p.m. UTC | #12
On 5/23/20 1:53 AM, Andrew Morton wrote:
> On Fri, 22 May 2020 09:18:25 +0200 Guoqing Jiang <guoqing.jiang@cloud.ionos.com> wrote:
>
>>>> -	ClearPagePrivate(page);
>>>> -	set_page_private(newpage, page_private(page));
>>>> -	set_page_private(page, 0);
>>>> -	put_page(page);
>>>> +	set_page_private(newpage, detach_page_private(page));
>>> attach_page_private(newpage, detach_page_private(page));
>> Mattew had suggested it as follows, but not sure if we can reorder of
>> the setup of
>> the bh and setting PagePrivate, so I didn't want to break the original
>> syntax.
>>
>> @@ -797,11 +797,7 @@ static int __buffer_migrate_page(struct address_space *mapping,
>>           if (rc != MIGRATEPAGE_SUCCESS)
>>                   goto unlock_buffers;
>>    
>> -       ClearPagePrivate(page);
>> -       set_page_private(newpage, page_private(page));
>> -       set_page_private(page, 0);
>> -       put_page(page);
>> -       get_page(newpage);
>> +       attach_page_private(newpage, detach_page_private(page));
>>    
>>           bh = head;
>>           do {
>> @@ -810,8 +806,6 @@ static int __buffer_migrate_page(struct address_space *mapping,
>>    
>>           } while (bh != head);
>>    
>> -       SetPagePrivate(newpage);
>> -
>>           if (mode != MIGRATE_SYNC_NO_COPY)
> This is OK - coherency between PG_private and the page's buffer
> ring is maintained by holding lock_page().

I appreciate your explanation.

Thanks,
Guoqing

> I have (effectively) applied the above change.

Patch

diff --git a/mm/migrate.c b/mm/migrate.c
index 5fed0305d2ec..f99502bc113c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -804,10 +804,7 @@  static int __buffer_migrate_page(struct address_space *mapping,
 	if (rc != MIGRATEPAGE_SUCCESS)
 		goto unlock_buffers;
 
-	ClearPagePrivate(page);
-	set_page_private(newpage, page_private(page));
-	set_page_private(page, 0);
-	put_page(page);
+	set_page_private(newpage, detach_page_private(page));
 	get_page(newpage);
 
 	bh = head;