| Message ID | 1581640185-95731-1-git-send-email-yang.shi@linux.alibaba.com |
|---|---|
| State | New, archived |
| Series | mm: migrate.c: migrate PG_readahead flag |
On Fri, 14 Feb 2020 08:29:45 +0800 Yang Shi <yang.shi@linux.alibaba.com> wrote:

> Currently migration code doesn't migrate PG_readahead flag.
> Theoretically this would incur slight performance loss as the
> application might have to ramp its readahead back up again. Even though
> such problem happens, it might be hidden by something else since
> migration is typically triggered by compaction and NUMA balancing, any
> of which should be more noticeable.
>
> Migrate the flag after end_page_writeback() since it may clear
> PG_reclaim flag, which is the same bit as PG_readahead, for the new
> page.
>
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -647,6 +647,14 @@ void migrate_page_states(struct page *newpage, struct page *page)
>  	if (PageWriteback(newpage))
>  		end_page_writeback(newpage);
>
> +	/*
> +	 * PG_readahead share the same bit with PG_reclaim, the above
> +	 * end_page_writeback() may clear PG_readahead mistakenly, so set
> +	 * the bit after that.
> +	 */
> +	if (PageReadahead(page))
> +		SetPageReadahead(newpage);
> +
>  	copy_page_owner(page, newpage);
>

Why not

	if (PageWriteback(newpage)) {
		end_page_writeback(newpage);
		/*
		 * PG_readahead share the same bit with PG_reclaim, the above
		 * end_page_writeback() may clear PG_readahead mistakenly, so
		 * set the bit after that.
		 */
		if (PageReadahead(page))
			SetPageReadahead(newpage);
	}

?
On 2/13/20 6:55 PM, Andrew Morton wrote:
> On Fri, 14 Feb 2020 08:29:45 +0800 Yang Shi <yang.shi@linux.alibaba.com> wrote:
>
>> Currently migration code doesn't migrate PG_readahead flag.
>> Theoretically this would incur slight performance loss as the
>> application might have to ramp its readahead back up again. Even though
>> such problem happens, it might be hidden by something else since
>> migration is typically triggered by compaction and NUMA balancing, any
>> of which should be more noticeable.
>>
>> Migrate the flag after end_page_writeback() since it may clear
>> PG_reclaim flag, which is the same bit as PG_readahead, for the new
>> page.
>>
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -647,6 +647,14 @@ void migrate_page_states(struct page *newpage, struct page *page)
>>  	if (PageWriteback(newpage))
>>  		end_page_writeback(newpage);
>>
>> +	/*
>> +	 * PG_readahead share the same bit with PG_reclaim, the above
>> +	 * end_page_writeback() may clear PG_readahead mistakenly, so set
>> +	 * the bit after that.
>> +	 */
>> +	if (PageReadahead(page))
>> +		SetPageReadahead(newpage);
>> +
>>  	copy_page_owner(page, newpage);
>>
> Why not

The newpage may not have writeback set; migrating the readahead flag should not
depend on it.

>
> 	if (PageWriteback(newpage)) {
> 		end_page_writeback(newpage);
> 		/*
> 		 * PG_readahead share the same bit with PG_reclaim, the above
> 		 * end_page_writeback() may clear PG_readahead mistakenly, so
> 		 * set the bit after that.
> 		 */
> 		if (PageReadahead(page))
> 			SetPageReadahead(newpage);
> 	}
>
> ?
On Thu, Feb 13, 2020 at 07:58:40PM -0800, Yang Shi wrote:
> On 2/13/20 6:55 PM, Andrew Morton wrote:
> > On Fri, 14 Feb 2020 08:29:45 +0800 Yang Shi <yang.shi@linux.alibaba.com> wrote:
> > > Currently migration code doesn't migrate PG_readahead flag.
> > > Theoretically this would incur slight performance loss as the
> > > application might have to ramp its readahead back up again. Even though
> > > such problem happens, it might be hidden by something else since
> > > migration is typically triggered by compaction and NUMA balancing, any
> > > of which should be more noticeable.
> > >
> > > Migrate the flag after end_page_writeback() since it may clear
> > > PG_reclaim flag, which is the same bit as PG_readahead, for the new
> > > page.
> > >
> > > --- a/mm/migrate.c
> > > +++ b/mm/migrate.c
> > > @@ -647,6 +647,14 @@ void migrate_page_states(struct page *newpage, struct page *page)
> > >  	if (PageWriteback(newpage))
> > >  		end_page_writeback(newpage);
> > > +	/*
> > > +	 * PG_readahead share the same bit with PG_reclaim, the above
> > > +	 * end_page_writeback() may clear PG_readahead mistakenly, so set
> > > +	 * the bit after that.
> > > +	 */
> > > +	if (PageReadahead(page))
> > > +		SetPageReadahead(newpage);
> > > +
> > >  	copy_page_owner(page, newpage);
> > Why not
>
> The newpage may not have writeback set, migrating readahead flag should not
> depend on it.

Indeed, if the page has writeback set, then the page does not have the
readahead flag set; it has the reclaim flag set. The original patch is
correct, afaict.

> > 	if (PageWriteback(newpage)) {
> > 		end_page_writeback(newpage);
> > 		/*
> > 		 * PG_readahead share the same bit with PG_reclaim, the above
> > 		 * end_page_writeback() may clear PG_readahead mistakenly, so
> > 		 * set the bit after that.
> > 		 */
> > 		if (PageReadahead(page))
> > 			SetPageReadahead(newpage);
> > 	}
> >
> > ?
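To make the ordering argument in this subthread concrete: PG_readahead and PG_reclaim occupy the same page-flag bit, and end_page_writeback() can clear that shared bit. The following standalone program is only a toy model (the flag names, bit position, and toy_end_page_writeback() helper are stand-ins, not the real kernel implementation); it shows why copying the readahead bit before end_page_writeback() would lose it, while copying it afterwards, as the patch does, keeps it set:

```c
#include <stdio.h>

/*
 * Toy model: PG_readahead aliases PG_reclaim, as in the kernel's
 * enum pageflags. The bit position chosen here is illustrative only.
 */
#define PG_RECLAIM_BIT   (1u << 4)
#define PG_READAHEAD_BIT PG_RECLAIM_BIT   /* shared bit */

struct toy_page { unsigned int flags; };

/* Models writeback completion clearing PG_reclaim on the page. */
static void toy_end_page_writeback(struct toy_page *p)
{
	p->flags &= ~PG_RECLAIM_BIT;
}

int main(void)
{
	struct toy_page newpage = { .flags = 0 };

	/* Wrong order: copy the readahead bit first... */
	newpage.flags |= PG_READAHEAD_BIT;
	toy_end_page_writeback(&newpage);   /* ...writeback completion wipes it */
	printf("set before end_page_writeback: readahead=%d\n",
	       !!(newpage.flags & PG_READAHEAD_BIT));

	/* Patch order: finish writeback first, then migrate the flag. */
	toy_end_page_writeback(&newpage);
	newpage.flags |= PG_READAHEAD_BIT;
	printf("set after end_page_writeback:  readahead=%d\n",
	       !!(newpage.flags & PG_READAHEAD_BIT));
	return 0;
}
```

The first printf reports 0 (the flag is lost), the second reports 1, which is the ordering the patch enforces in migrate_page_states().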
On Fri 14-02-20 08:29:45, Yang Shi wrote:
> Currently migration code doesn't migrate PG_readahead flag.
> Theoretically this would incur slight performance loss as the
> application might have to ramp its readahead back up again. Even though
> such problem happens, it might be hidden by something else since
> migration is typically triggered by compaction and NUMA balancing, any
> of which should be more noticeable.
>
> Migrate the flag after end_page_writeback() since it may clear
> PG_reclaim flag, which is the same bit as PG_readahead, for the new
> page.

Looks like an omission. The readahead flag was added later (2.6.23), while
migration predates 2.6.17.

> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
> I didn't experience any real problem, found by visual inspection. And, this was
> discussed in thread: https://lore.kernel.org/linux-mm/185ce762-f25d-a013-6daa-8c288f1ff791@linux.alibaba.com/T/#m1977ce1de513401b7d09d6fa14fcffe849580aae
>
>  mm/migrate.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index edf42ed..f3c492d 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -647,6 +647,14 @@ void migrate_page_states(struct page *newpage, struct page *page)
>  	if (PageWriteback(newpage))
>  		end_page_writeback(newpage);
>
> +	/*
> +	 * PG_readahead share the same bit with PG_reclaim, the above
> +	 * end_page_writeback() may clear PG_readahead mistakenly, so set
> +	 * the bit after that.
> +	 */
> +	if (PageReadahead(page))
> +		SetPageReadahead(newpage);
> +
>  	copy_page_owner(page, newpage);
>
>  mem_cgroup_migrate(page, newpage);
> --
> 1.8.3.1
On 14.02.20 01:29, Yang Shi wrote:
> Currently migration code doesn't migrate PG_readahead flag.
> Theoretically this would incur slight performance loss as the
> application might have to ramp its readahead back up again. Even though
> such problem happens, it might be hidden by something else since
> migration is typically triggered by compaction and NUMA balancing, any
> of which should be more noticeable.
>
> Migrate the flag after end_page_writeback() since it may clear
> PG_reclaim flag, which is the same bit as PG_readahead, for the new
> page.
>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
> ---
> I didn't experience any real problem, found by visual inspection. And, this was
> discussed in thread: https://lore.kernel.org/linux-mm/185ce762-f25d-a013-6daa-8c288f1ff791@linux.alibaba.com/T/#m1977ce1de513401b7d09d6fa14fcffe849580aae
>
>  mm/migrate.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index edf42ed..f3c492d 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -647,6 +647,14 @@ void migrate_page_states(struct page *newpage, struct page *page)
>  	if (PageWriteback(newpage))
>  		end_page_writeback(newpage);
>
> +	/*
> +	 * PG_readahead share the same bit with PG_reclaim, the above
> +	 * end_page_writeback() may clear PG_readahead mistakenly, so set
> +	 * the bit after that.
> +	 */
> +	if (PageReadahead(page))
> +		SetPageReadahead(newpage);
> +
>  	copy_page_owner(page, newpage);
>
>  mem_cgroup_migrate(page, newpage);
>

Looks good to me!

Reviewed-by: David Hildenbrand <david@redhat.com>
diff --git a/mm/migrate.c b/mm/migrate.c
index edf42ed..f3c492d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -647,6 +647,14 @@ void migrate_page_states(struct page *newpage, struct page *page)
 	if (PageWriteback(newpage))
 		end_page_writeback(newpage);
 
+	/*
+	 * PG_readahead share the same bit with PG_reclaim, the above
+	 * end_page_writeback() may clear PG_readahead mistakenly, so set
+	 * the bit after that.
+	 */
+	if (PageReadahead(page))
+		SetPageReadahead(newpage);
+
 	copy_page_owner(page, newpage);
 
 	mem_cgroup_migrate(page, newpage);
Currently migration code doesn't migrate PG_readahead flag.
Theoretically this would incur slight performance loss as the
application might have to ramp its readahead back up again. Even though
such problem happens, it might be hidden by something else since
migration is typically triggered by compaction and NUMA balancing, any
of which should be more noticeable.

Migrate the flag after end_page_writeback() since it may clear
PG_reclaim flag, which is the same bit as PG_readahead, for the new
page.

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
---
I didn't experience any real problem, found by visual inspection. And, this was
discussed in thread: https://lore.kernel.org/linux-mm/185ce762-f25d-a013-6daa-8c288f1ff791@linux.alibaba.com/T/#m1977ce1de513401b7d09d6fa14fcffe849580aae

 mm/migrate.c | 8 ++++++++
 1 file changed, 8 insertions(+)
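The "same bit" relationship the changelog relies on comes from the kernel's page flag definitions, where PG_readahead is declared as an alias of PG_reclaim. A paraphrased excerpt of include/linux/page-flags.h (the surrounding entries and comments vary by kernel version, so treat this as illustrative rather than an exact copy):

```c
/* include/linux/page-flags.h (abridged, paraphrased) */
enum pageflags {
	/* ... */
	PG_reclaim,			/* page is to be reclaimed asap */
	/* ... */
	PG_readahead = PG_reclaim,	/* readahead reuses the reclaim bit */
	/* ... */
};
```

Because SetPageReadahead() and the reclaim-flag helpers operate on the same bit, any path that clears PG_reclaim, such as writeback completion, also clears an already-set PG_readahead, which is why the patch sets the flag on newpage only after end_page_writeback().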