Message ID: 1574166203-151975-4-git-send-email-alex.shi@linux.alibaba.com (mailing list archive)
State: New, archived
Series: per lruvec lru_lock for memcg
On Tue, Nov 19, 2019 at 08:23:17PM +0800, Alex Shi wrote:
> +static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page,
> +			struct pglist_data *pgdat, unsigned long *flags)
> +{
> +	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
> +
> +	spin_lock_irqsave(&lruvec->lru_lock, *flags);
> +
> +	return lruvec;
> +}

This should be a macro, not a function.  You basically can't do this;
spin_lock_irqsave needs to write to a variable which can then be passed
to spin_unlock_irqrestore().  What you're doing here will dereference the
pointer in _this_ function, but won't propagate the modified value back
to the caller.  I suppose you could do something like this ...

static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page,
		struct pglist_data *pgdat, unsigned long *flagsp)
{
	unsigned long flags;
	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);

	spin_lock_irqsave(&lruvec->lru_lock, flags);
	*flagsp = flags;
	return lruvec;
}

Almost certainly easier to write a macro though.

You shouldn't need the two prior patches with this kind of change.
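What Matthew has in mind might look like the sketch below, written in the
kernel's usual GNU statement-expression style (the macro body is
illustrative, not code from the thread). With a macro, the caller hands
over its flags variable directly and spin_lock_irqsave() expands against
that lvalue:

#define lock_page_lruvec_irqsave(page, pgdat, flags)			\
({									\
	struct lruvec *__lruvec = mem_cgroup_page_lruvec(page, pgdat);	\
	spin_lock_irqsave(&__lruvec->lru_lock, flags);			\
	__lruvec;							\
})

(As the follow-up below notes, the original function form also works in
practice, because spin_lock_irqsave() is itself a macro and
'*flags = ...' writes through the dereference.)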
On Tue, Nov 19, 2019 at 08:23:17PM +0800, Alex Shi wrote:
> This patchset moves lru_lock into lruvec, giving each lruvec its own
> lru_lock, and thus a per-memcg lru_lock on each node.
>
> This is the main patch that replaces the per-node lru_lock with a
> per-memcg lruvec lock.
[...]
> @@ -901,6 +905,26 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
>  	return &pgdat->__lruvec;
>  }
>  
> +static inline struct lruvec *lock_page_lruvec_irq(struct page *page,
> +						struct pglist_data *pgdat)
> +{
> +	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
> +
> +	spin_lock_irq(&lruvec->lru_lock);
> +
> +	return lruvec;

While this works in practice, it looks wrong because it doesn't follow
the mem_cgroup_page_lruvec() rules.

Please open-code spin_lock_irq(&pgdat->__lruvec->lru_lock) instead.

> @@ -1246,6 +1245,46 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
>  	return lruvec;
>  }
>  
> +struct lruvec *lock_page_lruvec_irq(struct page *page,
> +					struct pglist_data *pgdat)
> +{
> +	struct lruvec *lruvec;
> +
> +again:
> +	rcu_read_lock();
> +	lruvec = mem_cgroup_page_lruvec(page, pgdat);
> +	spin_lock_irq(&lruvec->lru_lock);
> +	rcu_read_unlock();

The spinlock doesn't prevent the lruvec from being freed.

You deleted the rules from the mem_cgroup_page_lruvec() documentation,
but they still apply: if the page is already !PageLRU() by the time
you get here, it could get reclaimed or migrated to another cgroup,
and that can free the memcg/lruvec. Merely having the lru_lock held
does not prevent this.

Either the page needs to be locked, or the page needs to be PageLRU
with the lru_lock held to prevent somebody else from isolating
it. Otherwise, the lruvec is not safe to use.
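The open-coded !CONFIG_MEMCG stub Johannes asks for might look roughly
like this (a sketch; note that pgdat->__lruvec is an embedded struct, so
the lock is reached with '.' rather than '->'):

static inline struct lruvec *lock_page_lruvec_irq(struct page *page,
						struct pglist_data *pgdat)
{
	/* Without memcg there is exactly one lruvec per node. */
	spin_lock_irq(&pgdat->__lruvec.lru_lock);

	return &pgdat->__lruvec;
}

Open-coding makes it explicit that the stub does not rely on
mem_cgroup_page_lruvec() and its lookup rules.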
On Tue, Nov 19, 2019 at 07:57:04AM -0800, Matthew Wilcox wrote:
> On Tue, Nov 19, 2019 at 08:23:17PM +0800, Alex Shi wrote:
> > +static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page,
> > +			struct pglist_data *pgdat, unsigned long *flags)
> > +{
> > +	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
> > +
> > +	spin_lock_irqsave(&lruvec->lru_lock, *flags);
> > +
> > +	return lruvec;
> > +}
>
> This should be a macro, not a function.  You basically can't do this;
> spin_lock_irqsave needs to write to a variable which can then be passed
> to spin_unlock_irqrestore().  What you're doing here will dereference the
> pointer in _this_ function, but won't propagate the modified value back
> to the caller.  I suppose you could do something like this ...

This works because spin_lock_irqsave() and local_irq_save() are
macros. It boils down to '*flags = arch_local_irq_save()' in this
function, and therefore does the right thing.

We exploit that in quite a few places:

$ git grep 'spin_lock_irqsave(.*\*flags' | wc -l
39
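The write-through behavior is easy to model in self-contained userspace
C; save_flags() below is a toy stand-in (an assumption for illustration)
for local_irq_save(), which likewise expands against whatever lvalue it
is handed:

#include <assert.h>

/* Toy stand-in for local_irq_save(): a macro assigning to an lvalue. */
#define save_flags(flags)	((flags) = 0x2aUL)

static void lock_irqsave_model(unsigned long *flagsp)
{
	save_flags(*flagsp);	/* expands to (*flagsp) = 0x2aUL */
}

int main(void)
{
	unsigned long flags = 0;

	lock_irqsave_model(&flags);
	assert(flags == 0x2aUL);	/* the caller's variable was updated */
	return 0;
}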
On Tue, Nov 19, 2019 at 4:24 AM Alex Shi <alex.shi@linux.alibaba.com> wrote:
>
> This patchset moves lru_lock into lruvec, giving each lruvec its own
> lru_lock, and thus a per-memcg lru_lock on each node.
>
> This is the main patch that replaces the per-node lru_lock with a
> per-memcg lruvec lock.
[...]

This patch (and series) still has unsafe accesses to lruvec. Alex, I was
hoping that you would drop this series in favor of Hugh's patches.
Anyways, I will post Hugh's patches for review, to be considered for
5.6. I will run a couple of performance experiments.

Shakeel
On 2019/11/20 12:04 AM, Johannes Weiner wrote:
>> +
>> +	return lruvec;
>
> While this works in practice, it looks wrong because it doesn't follow
> the mem_cgroup_page_lruvec() rules.
>
> Please open-code spin_lock_irq(&pgdat->__lruvec->lru_lock) instead.

That's right. Thanks for the suggestion!

>> +struct lruvec *lock_page_lruvec_irq(struct page *page,
>> +					struct pglist_data *pgdat)
>> +{
>> +	struct lruvec *lruvec;
>> +
>> +again:
>> +	rcu_read_lock();
>> +	lruvec = mem_cgroup_page_lruvec(page, pgdat);
>> +	spin_lock_irq(&lruvec->lru_lock);
>> +	rcu_read_unlock();
>
> The spinlock doesn't prevent the lruvec from being freed.
>
> You deleted the rules from the mem_cgroup_page_lruvec() documentation,
> but they still apply: if the page is already !PageLRU() by the time
> you get here, it could get reclaimed or migrated to another cgroup,
> and that can free the memcg/lruvec. Merely having the lru_lock held
> does not prevent this.

Forgive my ignorance, but I still don't see the details of the unsafe
lruvec here. From my short-sighted view, the spin_lock_irq (which embeds
a preempt_disable) blocks all RCU grace periods and thus keeps all
memcgs alive until preemption is re-enabled at unlock time. Is this
right? If so, even if page->mem_cgroup migrates to another cgroup, both
the new and the old cgroup should still be alive here.

> Either the page needs to be locked, or the page needs to be PageLRU
> with the lru_lock held to prevent somebody else from isolating
> it. Otherwise, the lruvec is not safe to use.

Do you mean that we may get the wrong lruvec->lru_lock if !PageLRU, so
the page may get freed by others? Sorry, I got lost there.

Thanks
Alex
On 2019/11/20 12:44 AM, Johannes Weiner wrote:
> On Tue, Nov 19, 2019 at 07:57:04AM -0800, Matthew Wilcox wrote:
>> On Tue, Nov 19, 2019 at 08:23:17PM +0800, Alex Shi wrote:
>>> +static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page,
>>> +			struct pglist_data *pgdat, unsigned long *flags)
>>> +{
>>> +	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
>>> +
>>> +	spin_lock_irqsave(&lruvec->lru_lock, *flags);
>>> +
>>> +	return lruvec;
>>> +}
>>
>> This should be a macro, not a function. [...]
>
> This works because spin_lock_irqsave() and local_irq_save() are
> macros. It boils down to '*flags = arch_local_irq_save()' in this
> function, and therefore does the right thing.
>
> We exploit that in quite a few places:
>
> $ git grep 'spin_lock_irqsave(.*\*flags' | wc -l
> 39

Yes, thank you all for pointing this out!
On Wed, Nov 20, 2019 at 07:41:44PM +0800, Alex Shi wrote:
> On 2019/11/20 12:04 AM, Johannes Weiner wrote:
> > The spinlock doesn't prevent the lruvec from being freed.
> >
> > You deleted the rules from the mem_cgroup_page_lruvec() documentation,
> > but they still apply: if the page is already !PageLRU() by the time
> > you get here, it could get reclaimed or migrated to another cgroup,
> > and that can free the memcg/lruvec. Merely having the lru_lock held
> > does not prevent this.
>
> Forgive my ignorance, but I still don't see the details of the unsafe
> lruvec here. From my short-sighted view, the spin_lock_irq (which embeds
> a preempt_disable) blocks all RCU grace periods and thus keeps all
> memcgs alive until preemption is re-enabled at unlock time. Is this
> right? If so, even if page->mem_cgroup migrates to another cgroup, both
> the new and the old cgroup should still be alive here.

You are right about the freeing part, I missed this. And I should have
read this email here before sending out my "fix" to the current code;
thankfully Hugh re-iterated my mistake on that thread. My apologies.

But I still don't understand how the moving part is safe. You look up
the lruvec optimistically, lock it, then verify the lookup. What keeps
page->mem_cgroup from changing after you verified it?

lock_page_lruvec():			mem_cgroup_move_account():
again:
rcu_read_lock()
lruvec = page->mem_cgroup->lruvec
					isolate_lru_page()
spin_lock_irq(&lruvec->lru_lock)
rcu_read_unlock()
if page->mem_cgroup->lruvec != lruvec:
  spin_unlock_irq(&lruvec->lru_lock)
  goto again;
					page->mem_cgroup = new cgroup
					putback_lru_page() // new lruvec
					SetPageLRU()
return lruvec; // old lruvec

The caller assumes the page belongs to the returned lruvec and will
then change the page's lru state with a mismatched page and lruvec.

If we could restrict lock_page_lruvec() to working only on PageLRU
pages, we could fix the problem with memory barriers. But this won't
work for split_huge_page(), which is AFAICT the only user that needs
to freeze the lru state of a page that could be isolated elsewhere.

So AFAICS the only option is to lock out mem_cgroup_move_account()
entirely when the lru_lock is held. Which I guess should be fine.
On 2019/11/22 6:06 AM, Johannes Weiner wrote:
>> Forgive my ignorance, but I still don't see the details of the unsafe
>> lruvec here. [...]
>
> You are right about the freeing part, I missed this. And I should have
> read this email here before sending out my "fix" to the current code;
> thankfully Hugh re-iterated my mistake on that thread. My apologies.

That's all right. You and Hugh have given me a lot of help! :)

> But I still don't understand how the moving part is safe. You look up
> the lruvec optimistically, lock it, then verify the lookup. What keeps
> page->mem_cgroup from changing after you verified it?
>
> [...]
>
> The caller assumes the page belongs to the returned lruvec and will
> then change the page's lru state with a mismatched page and lruvec.

Yes, that's the problem we have to deal with.

> If we could restrict lock_page_lruvec() to working only on PageLRU
> pages, we could fix the problem with memory barriers. But this won't
> work for split_huge_page(), which is AFAICT the only user that needs
> to freeze the lru state of a page that could be isolated elsewhere.
>
> So AFAICS the only option is to lock out mem_cgroup_move_account()
> entirely when the lru_lock is held. Which I guess should be fine.

I guess we can start from lock_page_memcg; is this a good start?

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7e6387ad01f0..f4bbbf72c5b8 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1224,7 +1224,7 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 		goto out;
 	}
 
-	memcg = page->mem_cgroup;
+	memcg = lock_page_memcg(page);
 	/*
 	 * Swapcache readahead pages are added to the LRU - and
 	 * possibly migrated - before they are charged.

Thanks a lot!
Alex
On Fri, Nov 22, 2019 at 10:36:32AM +0800, Alex Shi wrote:
> On 2019/11/22 6:06 AM, Johannes Weiner wrote:
> > If we could restrict lock_page_lruvec() to working only on PageLRU
> > pages, we could fix the problem with memory barriers. But this won't
> > work for split_huge_page(), which is AFAICT the only user that needs
> > to freeze the lru state of a page that could be isolated elsewhere.
> >
> > So AFAICS the only option is to lock out mem_cgroup_move_account()
> > entirely when the lru_lock is held. Which I guess should be fine.
>
> I guess we can start from lock_page_memcg; is this a good start?

Yes.

> -	memcg = page->mem_cgroup;
> +	memcg = lock_page_memcg(page);
>  	/*
>  	 * Swapcache readahead pages are added to the LRU - and
>  	 * possibly migrated - before they are charged.

test_clear_page_writeback() calls this function with that lock already
held, so that would deadlock. Let's keep the locking in
lock_page_lruvec():

	memcg = lock_page_memcg(page);
	lruvec = mem_cgroup_lruvec(page_pgdat(page), memcg);

	spin_lock_irqsave(&lruvec->lru_lock, *flags);
	return lruvec;

unlock_lruvec():

	spin_unlock_irqrestore(&lruvec->lru_lock);
	__unlock_page_memcg(lruvec_memcg(lruvec));

The lock ordering should be fine as well. But it might be a good idea
to stick a might_lock(&memcg->move_lock) in lock_page_memcg() before
that atomic_read() and test with lockdep enabled.

But that leaves me with one more worry: compaction. We locked out
charge moving now, so between that and knowing that the page is alive,
we have page->mem_cgroup stable. But compaction doesn't know whether
the page is alive - it comes from a pfn and finds out using PageLRU.

In the current code, pgdat->lru_lock remains the same before and after
the page is charged to a cgroup, so once compaction has that locked
and it observes PageLRU, it can go ahead and isolate the page.

But lruvec->lru_lock changes during charging, and then compaction may
hold the wrong lock during isolation:

compaction:				generic_file_buffered_read:

					page_cache_alloc()

!PageBuddy()

lock_page_lruvec(page)
  lruvec = mem_cgroup_page_lruvec()
  spin_lock(&lruvec->lru_lock)
  if lruvec != mem_cgroup_page_lruvec()
    goto again

					add_to_page_cache_lru()
					  mem_cgroup_commit_charge()
					    page->mem_cgroup = foo
					  lru_cache_add()
					    __pagevec_lru_add()
					      SetPageLRU()

if PageLRU(page):
  __isolate_lru_page()

I don't see what prevents the lruvec from changing under compaction,
neither in your patches nor in Hugh's. Maybe I'm missing something?
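The lockdep annotation Johannes suggests could be as small as the
following sketch. The placement, just before lock_page_memcg()'s
moving_account fastpath check, is an assumption about where it would
go, not code from the thread:

	/*
	 * Sketch: teach lockdep about the potential move_lock
	 * dependency even on the fastpath where no charge moving
	 * is in flight.
	 */
	might_lock(&memcg->move_lock);

	if (atomic_read(&memcg->moving_account) <= 0)
		return memcg;

Any path that calls lock_page_memcg() while already holding an lru_lock
would then be flagged as an inversion, even when the fastpath skips
taking the real lock.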
On Fri, 22 Nov 2019, Johannes Weiner wrote:
>
> But that leaves me with one more worry: compaction. We locked out
> charge moving now, so between that and knowing that the page is alive,
> we have page->mem_cgroup stable. But compaction doesn't know whether
> the page is alive - it comes from a pfn and finds out using PageLRU.
>
> In the current code, pgdat->lru_lock remains the same before and after
> the page is charged to a cgroup, so once compaction has that locked
> and it observes PageLRU, it can go ahead and isolate the page.
>
> But lruvec->lru_lock changes during charging, and then compaction may
> hold the wrong lock during isolation:
>
> [...]
>
> I don't see what prevents the lruvec from changing under compaction,
> neither in your patches nor in Hugh's. Maybe I'm missing something?

Speaking for my patches only: I'm humbled, I think you have caught me,
I cannot find any argument against the race you suggest here.

The race with mem_cgroup_move_account(), which Konstantin pointed out in
2012's https://lore.kernel.org/lkml/4F433418.3010401@openvz.org/ but I
later misunderstood, and came to think I needed no patch against, until
this week coming to perceive the same race in isolate_lru_page(): that
one is easily and satisfactorily fixed by holding lruvec lock in
mem_cgroup_move_account() - embarrassing, but not too serious.

Your race here (again, lruvec lock taken then PageLRU observed, but
page->mem_cgroup changed in between) really questions my whole scheme:
I am not going to propose a solution now, I'll have to go back and
recheck my assumptions all over. Certainly isolate_migratepage_block()
has a harder job than any other, but I need to re-review it all.

Maybe we got it right back in the days of PageCgroupUsed, and then I
paid too little attention when rebasing to your welcome simplifications.
I don't think any of us want to bring back PageCgroupUsed! And maybe we
could get it right by always holding lruvec lock in commit_charge(),
lrucare or not; but that's a much hotter path, and not a change I'd
expect anyone to embrace.

I'll go away and re-examine it all; probably start by verifying that
your race actually happens in practice, though we never observed it.

Heavy-hearted thanks, Hannes!
Hugh
On 2019/11/23 12:16 AM, Johannes Weiner wrote:
> On Fri, Nov 22, 2019 at 10:36:32AM +0800, Alex Shi wrote:
> > I guess we can start from lock_page_memcg; is this a good start?
>
> Yes.
>
> test_clear_page_writeback() calls this function with that lock already
> held, so that would deadlock. Let's keep the locking in
> lock_page_lruvec().
>
> [...]
>
> The lock ordering should be fine as well. But it might be a good idea
> to stick a might_lock(&memcg->move_lock) in lock_page_memcg() before
> that atomic_read() and test with lockdep enabled.

Hi Johannes,

Thanks a lot for the suggestion. I will look into this and try.

> But that leaves me with one more worry: compaction. [...]
>
> I don't see what prevents the lruvec from changing under compaction,
> neither in your patches nor in Hugh's. Maybe I'm missing something?

Yes, it's a problem. I guess we could move the lruvec recheck after the
PageLRU() test in compaction. Then it would be safe, and the small extra
burden on compaction should be fine; file reads are left undisturbed.

Thanks
Alex
> But that leaves me with one more worry: compaction. [...]
>
> I don't see what prevents the lruvec from changing under compaction,
> neither in your patches nor in Hugh's. Maybe I'm missing something?

Hi Johannes,

It looks like my patch does the lruvec recheck/relock after PageLRU in
compaction.c, so it should be fine for your question. I will do more
testing after all the changes.

Thanks
Alex

@@ -949,10 +959,26 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page))
 			goto isolate_fail;
 
+		rcu_read_lock();
+reget_lruvec:
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+
 		/* If we already hold the lock, we can skip some rechecking */
-		if (!locked) {
-			locked = compact_lock_irqsave(&pgdat->lru_lock,
-								&flags, cc);
+		if (lruvec != locked_lruvec) {
+			if (locked_lruvec) {
+				spin_unlock_irqrestore(&locked_lruvec->lru_lock,
+							flags);
+				locked_lruvec = NULL;
+			}
+			if (compact_lock_irqsave(&lruvec->lru_lock,
+							&flags, cc))
+				locked_lruvec = lruvec;
+
+
+			if (lruvec != mem_cgroup_page_lruvec(page, pgdat))
+				goto reget_lruvec;
+
+			rcu_read_unlock();
 
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
@@ -974,9 +1000,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 				low_pfn += compound_nr(page) - 1;
 				goto isolate_fail;
 			}
-		}
+		} else
+			rcu_read_unlock();
 
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		/* Try isolate the page */
 		if (__isolate_lru_page(page, isolate_mode) != 0)
@@ -1017,9 +1043,10 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		 * page anyway.
 		 */
 		if (nr_isolated) {
-			if (locked) {
-				spin_unlock_irqrestore(&pgdat->lru_lock, flags);
-				locked = false;
+			if (locked_lruvec) {
+				spin_unlock_irqrestore(&locked_lruvec->lru_lock,
+							flags);
+				locked_lruvec = NULL;
 			}
 			putback_movable_pages(&cc->migratepages);
On Mon, Nov 25, 2019 at 1:26 AM Alex Shi <alex.shi@linux.alibaba.com> wrote:
>
> > But that leaves me with one more worry: compaction. [...]
> >
> > I don't see what prevents the lruvec from changing under compaction,
> > neither in your patches nor in Hugh's. Maybe I'm missing something?
>
> Hi Johannes,
>
> It looks like my patch does the lruvec recheck/relock after PageLRU in
> compaction.c, so it should be fine for your question. I will do more
> testing after all the changes.

Actually no: unless the PageLRU check and taking the lruvec lock are
atomic, the race mentioned by Johannes still exists.

Shakeel
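One way to make the check and the lock acquisition effectively atomic,
sketched here as an illustration of the direction rather than as code
from this series, is to claim the page with TestClearPageLRU() before
trusting page->mem_cgroup: once the LRU flag is cleared, neither reclaim
nor mem_cgroup_move_account() can isolate the page, so its lruvec can no
longer change under the caller:

	/* Sketch: claim the page first, then the lruvec is stable. */
	if (!TestClearPageLRU(page))
		goto isolate_fail;	/* somebody else owns the page */

	lruvec = mem_cgroup_page_lruvec(page, pgdat);
	spin_lock_irqsave(&lruvec->lru_lock, flags);
	/* PageLRU is clear: no isolation or charge move can race here. */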
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 5b86287fa069..9538253998a6 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -418,6 +418,10 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
 
 struct lruvec *mem_cgroup_page_lruvec(struct page *, struct pglist_data *);
 
+struct lruvec *lock_page_lruvec_irq(struct page *, struct pglist_data *);
+struct lruvec *lock_page_lruvec_irqsave(struct page *, struct pglist_data *,
+						unsigned long*);
+
 struct mem_cgroup *mem_cgroup_from_task(struct task_struct *p);
 
 struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
@@ -901,6 +905,26 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page,
 	return &pgdat->__lruvec;
 }
 
+static inline struct lruvec *lock_page_lruvec_irq(struct page *page,
+						struct pglist_data *pgdat)
+{
+	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
+
+	spin_lock_irq(&lruvec->lru_lock);
+
+	return lruvec;
+}
+
+static inline struct lruvec *lock_page_lruvec_irqsave(struct page *page,
+			struct pglist_data *pgdat, unsigned long *flags)
+{
+	struct lruvec *lruvec = mem_cgroup_page_lruvec(page, pgdat);
+
+	spin_lock_irqsave(&lruvec->lru_lock, *flags);
+
+	return lruvec;
+}
+
 static inline bool mm_match_cgroup(struct mm_struct *mm,
 				   struct mem_cgroup *memcg)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 07b784ffbb14..a13b8a602ee5 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -267,6 +267,8 @@ struct lruvec {
 	unsigned long			refaults;
 	/* Various lruvec state flags (enum lruvec_flags) */
 	unsigned long			flags;
+	/* per lruvec lru_lock for memcg */
+	spinlock_t			lru_lock;
 #ifdef CONFIG_MEMCG
 	struct pglist_data *pgdat;
 #endif
diff --git a/mm/compaction.c b/mm/compaction.c
index d20816b21b55..f8018226fdbc 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -787,7 +787,7 @@ static bool too_many_isolated(pg_data_t *pgdat)
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct lruvec *lruvec;
 	unsigned long flags = 0;
-	bool locked = false;
+	struct lruvec *locked_lruvec = NULL;
 	struct page *page = NULL, *valid_page = NULL;
 	unsigned long start_pfn = low_pfn;
 	bool skip_on_failure = false;
@@ -847,11 +847,21 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		 * contention, to give chance to IRQs. Abort completely if
 		 * a fatal signal is pending.
 		 */
-		if (!(low_pfn % SWAP_CLUSTER_MAX)
-		    && compact_unlock_should_abort(&pgdat->lru_lock,
-					    flags, &locked, cc)) {
-			low_pfn = 0;
-			goto fatal_pending;
+		if (!(low_pfn % SWAP_CLUSTER_MAX)) {
+			if (locked_lruvec) {
+				spin_unlock_irqrestore(&locked_lruvec->lru_lock,
+							flags);
+				locked_lruvec = NULL;
+			}
+
+			if (fatal_signal_pending(current)) {
+				cc->contended = true;
+
+				low_pfn = 0;
+				goto fatal_pending;
+			}
+
+			cond_resched();
 		}
 
 		if (!pfn_valid_within(low_pfn))
@@ -920,10 +930,10 @@ static bool too_many_isolated(pg_data_t *pgdat)
 			 */
 			if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
-				if (locked) {
-					spin_unlock_irqrestore(&pgdat->lru_lock,
-									flags);
-					locked = false;
+				if (locked_lruvec) {
+					spin_unlock_irqrestore(&locked_lruvec->lru_lock,
+									flags);
+					locked_lruvec = NULL;
 				}
 
 				if (!isolate_movable_page(page, isolate_mode))
@@ -949,10 +959,26 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		if (!(cc->gfp_mask & __GFP_FS) && page_mapping(page))
 			goto isolate_fail;
 
+		rcu_read_lock();
+reget_lruvec:
+		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+
 		/* If we already hold the lock, we can skip some rechecking */
-		if (!locked) {
-			locked = compact_lock_irqsave(&pgdat->lru_lock,
-								&flags, cc);
+		if (lruvec != locked_lruvec) {
+			if (locked_lruvec) {
+				spin_unlock_irqrestore(&locked_lruvec->lru_lock,
+							flags);
+				locked_lruvec = NULL;
+			}
+			if (compact_lock_irqsave(&lruvec->lru_lock,
+							&flags, cc))
+				locked_lruvec = lruvec;
+
+
+			if (lruvec != mem_cgroup_page_lruvec(page, pgdat))
+				goto reget_lruvec;
+
+			rcu_read_unlock();
 
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
@@ -974,9 +1000,9 @@ static bool too_many_isolated(pg_data_t *pgdat)
 				low_pfn += compound_nr(page) - 1;
 				goto isolate_fail;
 			}
-		}
+		} else
+			rcu_read_unlock();
 
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		/* Try isolate the page */
 		if (__isolate_lru_page(page, isolate_mode) != 0)
@@ -1017,9 +1043,10 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		 * page anyway.
 		 */
 		if (nr_isolated) {
-			if (locked) {
-				spin_unlock_irqrestore(&pgdat->lru_lock, flags);
-				locked = false;
+			if (locked_lruvec) {
+				spin_unlock_irqrestore(&locked_lruvec->lru_lock,
+							flags);
+				locked_lruvec = NULL;
 			}
 			putback_movable_pages(&cc->migratepages);
 			cc->nr_migratepages = 0;
@@ -1044,8 +1071,8 @@ static bool too_many_isolated(pg_data_t *pgdat)
 		low_pfn = end_pfn;
 
 isolate_abort:
-	if (locked)
-		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+	if (locked_lruvec)
+		spin_unlock_irqrestore(&locked_lruvec->lru_lock, flags);
 
 	/*
 	 * Updated the cached scanner pfn once the pageblock has been scanned
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4cdfdbeb6b2b..cf37c58f0558 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2495,17 +2495,13 @@ static void __split_huge_page_tail(struct page *head, int tail,
 }
 
 static void __split_huge_page(struct page *page, struct list_head *list,
-		pgoff_t end, unsigned long flags)
+	struct lruvec *lruvec, pgoff_t end, unsigned long flags)
 {
 	struct page *head = compound_head(page);
-	pg_data_t *pgdat = page_pgdat(head);
-	struct lruvec *lruvec;
 	struct address_space *swap_cache = NULL;
 	unsigned long offset = 0;
 	int i;
 
-	lruvec = mem_cgroup_page_lruvec(head, pgdat);
-
 	/* complete memcg works before add pages to LRU */
 	mem_cgroup_split_huge_fixup(head);
 
@@ -2554,7 +2550,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		xa_unlock(&head->mapping->i_pages);
 	}
 
-	spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 
 	remap_page(head);
 
@@ -2697,6 +2693,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	struct deferred_split *ds_queue = get_deferred_split_queue(page);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
+	struct lruvec *lruvec;
 	int count, mapcount, extra_pins, ret;
 	bool mlocked;
 	unsigned long flags = 0;
@@ -2766,7 +2763,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	lru_add_drain();
 
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
-	spin_lock_irqsave(&pgdata->lru_lock, flags);
+	lruvec = lock_page_lruvec_irqsave(head, pgdata, &flags);
 
 	if (mapping) {
 		XA_STATE(xas, &mapping->i_pages, page_index(head));
@@ -2797,7 +2794,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		}
 		spin_unlock(&ds_queue->split_queue_lock);
-		__split_huge_page(page, list, end, flags);
+		__split_huge_page(page, list, lruvec, end, flags);
 
 		if (PageSwapCache(head)) {
 			swp_entry_t entry = { .val = page_private(head) };
@@ -2816,7 +2813,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		spin_unlock(&ds_queue->split_queue_lock);
 fail:		if (mapping)
 			xa_unlock(&mapping->i_pages);
-		spin_unlock_irqrestore(&pgdata->lru_lock, flags);
+		spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 		remap_page(head);
 		ret = -EBUSY;
 	}
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 62470325f9bc..7e6387ad01f0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1210,9 +1210,8 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
  * @page: the page
  * @pgdat: pgdat of the page
  *
- * This function is only safe when following the LRU page isolation
- * and putback protocol: the LRU lock must be held, and the page must
- * either be PageLRU() or the caller must have isolated/allocated it.
+ * The caller needs to grantee the page's mem_cgroup is undisturbed during
+ * using. That could be done by lock_page_memcg or lock_page_lruvec.
  */
 struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgdat)
 {
@@ -1225,7 +1224,7 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 		goto out;
 	}
 
-	memcg = page->mem_cgroup;
+	memcg = READ_ONCE(page->mem_cgroup);
 	/*
 	 * Swapcache readahead pages are added to the LRU - and
 	 * possibly migrated - before they are charged.
@@ -1246,6 +1245,46 @@ struct lruvec *mem_cgroup_page_lruvec(struct page *page, struct pglist_data *pgd
 	return lruvec;
 }
 
+struct lruvec *lock_page_lruvec_irq(struct page *page,
+					struct pglist_data *pgdat)
+{
+	struct lruvec *lruvec;
+
+again:
+	rcu_read_lock();
+	lruvec = mem_cgroup_page_lruvec(page, pgdat);
+	spin_lock_irq(&lruvec->lru_lock);
+	rcu_read_unlock();
+
+	/* lruvec may changed in commit_charge() */
+	if (lruvec != mem_cgroup_page_lruvec(page, pgdat)) {
+		spin_unlock_irq(&lruvec->lru_lock);
+		goto again;
+	}
+
+	return lruvec;
+}
+
+struct lruvec *lock_page_lruvec_irqsave(struct page *page,
+			struct pglist_data *pgdat, unsigned long *flags)
+{
+	struct lruvec *lruvec;
+
+again:
+	rcu_read_lock();
+	lruvec = mem_cgroup_page_lruvec(page, pgdat);
+	spin_lock_irqsave(&lruvec->lru_lock, *flags);
+	rcu_read_unlock();
+
+	/* lruvec may changed in commit_charge() */
+	if (lruvec != mem_cgroup_page_lruvec(page, pgdat)) {
+		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+		goto again;
+	}
+
+	return lruvec;
+}
+
 /**
  * mem_cgroup_update_lru_size - account for adding or removing an lru page
  * @lruvec: mem_cgroup per zone lru vector
@@ -2571,41 +2610,43 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 	css_put_many(&memcg->css, nr_pages);
 }
 
-static void lock_page_lru(struct page *page, int *isolated)
+static struct lruvec *lock_page_lru(struct page *page, int *isolated)
 {
 	pg_data_t *pgdat = page_pgdat(page);
+	struct lruvec *lruvec = lock_page_lruvec_irq(page, pgdat);
 
-	spin_lock_irq(&pgdat->lru_lock);
 	if (PageLRU(page)) {
-		struct lruvec *lruvec;
 
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_lru(page));
 		*isolated = 1;
 	} else
 		*isolated = 0;
+
+	return lruvec;
 }
 
-static void unlock_page_lru(struct page *page, int isolated)
+static void unlock_page_lru(struct page *page, int isolated,
+				struct lruvec *locked_lruvec)
 {
-	pg_data_t *pgdat = page_pgdat(page);
+	struct lruvec *lruvec;
 
-	if (isolated) {
-		struct lruvec *lruvec;
+	spin_unlock_irq(&locked_lruvec->lru_lock);
+	lruvec = lock_page_lruvec_irq(page, page_pgdat(page));
 
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+	if (isolated) {
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 		SetPageLRU(page);
 		add_page_to_lru_list(page, lruvec, page_lru(page));
 	}
-	spin_unlock_irq(&pgdat->lru_lock);
+	spin_unlock_irq(&lruvec->lru_lock);
 }
 
 static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 			  bool lrucare)
 {
 	int isolated;
+	struct lruvec *lruvec;
 
 	VM_BUG_ON_PAGE(page->mem_cgroup, page);
 
@@ -2614,7 +2655,7 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 	 * may already be on some other mem_cgroup's LRU. Take care of it.
 	 */
 	if (lrucare)
-		lock_page_lru(page, &isolated);
+		lruvec = lock_page_lru(page, &isolated);
 
 	/*
 	 * Nobody should be changing or seriously looking at
@@ -2633,7 +2674,7 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 	page->mem_cgroup = memcg;
 
 	if (lrucare)
-		unlock_page_lru(page, isolated);
+		unlock_page_lru(page, isolated, lruvec);
 }
 
 #ifdef CONFIG_MEMCG_KMEM
@@ -2929,7 +2970,7 @@ void __memcg_kmem_uncharge(struct page *page, int order)
 
 /*
  * Because tail pages are not marked as "used", set it. We're under
- * pgdat->lru_lock and migration entries setup in all page mappings.
+ * pgdat->lruvec.lru_lock and migration entries setup in all page mappings.
 */
 void mem_cgroup_split_huge_fixup(struct page *head)
 {
diff --git a/mm/mlock.c b/mm/mlock.c
index a72c1eeded77..b509b80b8513 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -106,12 +106,10 @@ void mlock_vma_page(struct page *page)
  * Isolate a page from LRU with optional get_page() pin.
  * Assumes lru_lock already held and page already pinned.
  */
-static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
+static bool __munlock_isolate_lru_page(struct page *page,
+			struct lruvec *lruvec, bool getpage)
 {
 	if (PageLRU(page)) {
-		struct lruvec *lruvec;
-
-		lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
 		if (getpage)
 			get_page(page);
 		ClearPageLRU(page);
@@ -183,6 +181,7 @@ unsigned int munlock_vma_page(struct page *page)
 {
 	int nr_pages;
 	pg_data_t *pgdat = page_pgdat(page);
+	struct lruvec *lruvec;
 
 	/* For try_to_munlock() and to serialize with page migration */
 	BUG_ON(!PageLocked(page));
@@ -194,7 +193,7 @@ unsigned int munlock_vma_page(struct page *page)
 	 * might otherwise copy PageMlocked to part of the tail pages before
 	 * we clear it in the head page. It also stabilizes hpage_nr_pages().
 	 */
-	spin_lock_irq(&pgdat->lru_lock);
+	lruvec = lock_page_lruvec_irq(page, pgdat);
 
 	if (!TestClearPageMlocked(page)) {
 		/* Potentially, PTE-mapped THP: do not skip the rest PTEs */
@@ -205,15 +204,15 @@ unsigned int munlock_vma_page(struct page *page)
 	nr_pages = hpage_nr_pages(page);
 	__mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
 
-	if (__munlock_isolate_lru_page(page, true)) {
-		spin_unlock_irq(&pgdat->lru_lock);
+	if (__munlock_isolate_lru_page(page, lruvec, true)) {
+		spin_unlock_irq(&lruvec->lru_lock);
 		__munlock_isolated_page(page);
 		goto out;
 	}
 	__munlock_isolation_failed(page);
 
 unlock_out:
-	spin_unlock_irq(&pgdat->lru_lock);
+	spin_unlock_irq(&lruvec->lru_lock);
 
 out:
 	return nr_pages - 1;
@@ -291,28 +290,29 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 {
 	int i;
 	int nr = pagevec_count(pvec);
-	int delta_munlocked = -nr;
 	struct pagevec pvec_putback;
+	struct lruvec *lruvec = NULL;
 	int pgrescued = 0;
 
 	pagevec_init(&pvec_putback);
 
 	/* Phase 1: page isolation */
-	spin_lock_irq(&zone->zone_pgdat->lru_lock);
 	for (i = 0; i < nr; i++) {
 		struct page *page = pvec->pages[i];
 
+		lruvec = lock_page_lruvec_irq(page, page_pgdat(page));
+
 		if (TestClearPageMlocked(page)) {
 			/*
 			 * We already have pin from follow_page_mask()
 			 * so we can spare the get_page() here.
 			 */
-			if (__munlock_isolate_lru_page(page, false))
+			if (__munlock_isolate_lru_page(page, lruvec, false)) {
+				__mod_zone_page_state(zone, NR_MLOCK, -1);
+				spin_unlock_irq(&lruvec->lru_lock);
 				continue;
-			else
+			} else
 				__munlock_isolation_failed(page);
-		} else {
-			delta_munlocked++;
 		}
 
 		/*
@@ -323,9 +323,8 @@ static void __munlock_pagevec(struct pagevec *pvec, struct zone *zone)
 		 */
 		pagevec_add(&pvec_putback, pvec->pages[i]);
 		pvec->pages[i] = NULL;
+		spin_unlock_irq(&lruvec->lru_lock);
 	}
-	__mod_zone_page_state(zone, NR_MLOCK, delta_munlocked);
-	spin_unlock_irq(&zone->zone_pgdat->lru_lock);
 
 	/* Now we can release pins of pages that we are not munlocking */
 	pagevec_release(&pvec_putback);
diff --git a/mm/mmzone.c b/mm/mmzone.c
index 4686fdc23bb9..3750a90ed4a0 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -91,6 +91,7 @@ void lruvec_init(struct lruvec *lruvec)
 	enum lru_list lru;
 
 	memset(lruvec, 0, sizeof(struct lruvec));
+	spin_lock_init(&lruvec->lru_lock);
 
 	for_each_lru(lru)
 		INIT_LIST_HEAD(&lruvec->lists[lru]);
diff --git a/mm/page_idle.c b/mm/page_idle.c
index 295512465065..25f4b1cf3e0f 100644
--- a/mm/page_idle.c
+++ b/mm/page_idle.c
@@ -32,6 +32,7 @@ static struct page *page_idle_get_page(unsigned long pfn)
 {
 	struct page *page;
 	pg_data_t *pgdat;
+	struct lruvec *lruvec;
 
 	if (!pfn_valid(pfn))
 		return NULL;
@@ -42,12 +43,12 @@ static struct page *page_idle_get_page(unsigned long pfn)
 		return NULL;
 
 	pgdat = page_pgdat(page);
-	spin_lock_irq(&pgdat->lru_lock);
+	lruvec = lock_page_lruvec_irq(page, pgdat);
 	if (unlikely(!PageLRU(page))) {
 		put_page(page);
 		page = NULL;
 	}
-	spin_unlock_irq(&pgdat->lru_lock);
+	spin_unlock_irq(&lruvec->lru_lock);
 	return page;
 }
diff --git a/mm/swap.c b/mm/swap.c
index c36a10244d07..05fee145e382 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -64,12 +64,11 @@ static void __page_cache_release(struct page *page)
 		struct lruvec *lruvec;
 		unsigned long flags = 0;
 
-		spin_lock_irqsave(&pgdat->lru_lock, flags);
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		lruvec = lock_page_lruvec_irqsave(page, pgdat, &flags);
 		VM_BUG_ON_PAGE(!PageLRU(page), page);
 		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_off_lru(page));
-		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+		spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 	}
 	__ClearPageWaiters(page);
 }
@@ -192,26 +191,18 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 	void *arg)
 {
 	int i;
-	struct pglist_data *pgdat = NULL;
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = NULL;
 	unsigned long flags = 0;
 
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
-		struct pglist_data *pagepgdat = page_pgdat(page);
 
-		if (pagepgdat != pgdat) {
-			if (pgdat)
-				spin_unlock_irqrestore(&pgdat->lru_lock, flags);
-			pgdat = pagepgdat;
-			spin_lock_irqsave(&pgdat->lru_lock, flags);
-		}
+		lruvec = lock_page_lruvec_irqsave(page, page_pgdat(page), &flags);
 
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 		(*move_fn)(page, lruvec, arg);
+		spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 	}
-	if (pgdat)
-		spin_unlock_irqrestore(&pgdat->lru_lock, flags);
+
 	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
@@ -252,7 +243,6 @@ void rotate_reclaimable_page(struct page *page)
 	    !PageUnevictable(page) && PageLRU(page)) {
 		struct pagevec *pvec;
 		unsigned long flags;
-
 		get_page(page);
 		local_irq_save(flags);
 		pvec = this_cpu_ptr(&lru_rotate_pvecs);
@@ -325,11 +315,12 @@ static inline void activate_page_drain(int cpu)
 void activate_page(struct page *page)
 {
 	pg_data_t *pgdat = page_pgdat(page);
+	struct lruvec *lruvec;
 
 	page = compound_head(page);
-	spin_lock_irq(&pgdat->lru_lock);
-	__activate_page(page, mem_cgroup_page_lruvec(page, pgdat), NULL);
-	spin_unlock_irq(&pgdat->lru_lock);
+	lruvec = lock_page_lruvec_irq(page, pgdat);
+	__activate_page(page, lruvec, NULL);
+	spin_unlock_irq(&lruvec->lru_lock);
 }
 #endif
@@ -780,8 +771,7 @@ void release_pages(struct page **pages, int nr)
 {
 	int i;
 	LIST_HEAD(pages_to_free);
-	struct pglist_data *locked_pgdat = NULL;
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = NULL;
 	unsigned long uninitialized_var(flags);
 	unsigned int uninitialized_var(lock_batch);
 
@@ -791,21 +781,20 @@ void release_pages(struct page **pages, int nr)
 		/*
 		 * Make sure the IRQ-safe lock-holding time does not get
		 * excessive with a continuous string of pages from the
-		 * same pgdat. The lock is held only if pgdat != NULL.
+		 * same lruvec. The lock is held only if lruvec != NULL.
		 */
-		if (locked_pgdat && ++lock_batch == SWAP_CLUSTER_MAX) {
-			spin_unlock_irqrestore(&locked_pgdat->lru_lock, flags);
-			locked_pgdat = NULL;
+		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
+			spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+			lruvec = NULL;
 		}
 
 		if (is_huge_zero_page(page))
 			continue;
 
 		if (is_zone_device_page(page)) {
-			if (locked_pgdat) {
-				spin_unlock_irqrestore(&locked_pgdat->lru_lock,
-						       flags);
-				locked_pgdat = NULL;
+			if (lruvec) {
+				spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+				lruvec = NULL;
 			}
 			/*
 			 * ZONE_DEVICE pages that return 'false' from
@@ -822,27 +811,24 @@ void release_pages(struct page **pages, int nr)
 			continue;
 
 		if (PageCompound(page)) {
-			if (locked_pgdat) {
-				spin_unlock_irqrestore(&locked_pgdat->lru_lock, flags);
-				locked_pgdat = NULL;
+			if (lruvec) {
+				spin_unlock_irqrestore(&lruvec->lru_lock, flags);
+				lruvec = NULL;
 			}
 			__put_compound_page(page);
 			continue;
 		}
 
 		if (PageLRU(page)) {
-			struct pglist_data *pgdat = page_pgdat(page);
+			struct lruvec *new_lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));
 
-			if (pgdat != locked_pgdat) {
-				if (locked_pgdat)
-					spin_unlock_irqrestore(&locked_pgdat->lru_lock,
-									flags);
+			if (new_lruvec != lruvec) {
+				if (lruvec)
+					spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 				lock_batch = 0;
-				locked_pgdat = pgdat;
-				spin_lock_irqsave(&locked_pgdat->lru_lock, flags);
-			}
+				lruvec = lock_page_lruvec_irqsave(page, page_pgdat(page), &flags);
+
+			}
 
-			lruvec = mem_cgroup_page_lruvec(page, locked_pgdat);
 			VM_BUG_ON_PAGE(!PageLRU(page), page);
 			__ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
@@ -854,8 +840,8 @@ void release_pages(struct page **pages, int nr)
 		list_add(&page->lru, &pages_to_free);
 	}
-	if (locked_pgdat)
-		spin_unlock_irqrestore(&locked_pgdat->lru_lock, flags);
+	if (lruvec)
+		spin_unlock_irqrestore(&lruvec->lru_lock, flags);
 
 	mem_cgroup_uncharge_list(&pages_to_free);
 	free_unref_page_list(&pages_to_free);
@@ -893,7 +879,7 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 	VM_BUG_ON_PAGE(!PageHead(page), page);
 	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
 	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
-	lockdep_assert_held(&lruvec_pgdat(lruvec)->lru_lock);
+	lockdep_assert_held(&lruvec->lru_lock);
 
 	if (!list)
 		SetPageLRU(page_tail);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index d97985262dda..3cdf343e7a27 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1755,8 +1755,7 @@ int isolate_lru_page(struct page *page)
 		pg_data_t *pgdat = page_pgdat(page);
 		struct lruvec *lruvec;
 
-		spin_lock_irq(&pgdat->lru_lock);
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
+		lruvec = lock_page_lruvec_irq(page, pgdat);
 		if (PageLRU(page)) {
 			int lru = page_lru(page);
 			get_page(page);
@@ -1764,7 +1763,7 @@ int isolate_lru_page(struct page *page)
 			del_page_from_lru_list(page, lruvec, lru);
 			ret = 0;
 		}
-		spin_unlock_irq(&pgdat->lru_lock);
+		spin_unlock_irq(&lruvec->lru_lock);
 	}
 	return ret;
 }
@@ -1829,7 +1828,6 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
 static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 						     struct list_head *list)
 {
-	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	int nr_pages, nr_moved = 0;
 	LIST_HEAD(pages_to_free);
 	struct page *page;
@@ -1840,12 +1838,11 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 		if (unlikely(!page_evictable(page))) {
 			list_del(&page->lru);
-			spin_unlock_irq(&pgdat->lru_lock);
+			spin_unlock_irq(&lruvec->lru_lock);
 			putback_lru_page(page);
-			spin_lock_irq(&pgdat->lru_lock);
+			spin_lock_irq(&lruvec->lru_lock);
 			continue;
 		}
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		SetPageLRU(page);
 		lru = page_lru(page);
@@ -1860,9 +1857,9 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
 			del_page_from_lru_list(page, lruvec, lru);
 
 			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&pgdat->lru_lock);
+				spin_unlock_irq(&lruvec->lru_lock);
 				(*get_compound_page_dtor(page))(page);
-				spin_lock_irq(&pgdat->lru_lock);
+				spin_lock_irq(&lruvec->lru_lock);
 			} else
 				list_add(&page->lru, &pages_to_free);
 		} else {
@@ -1925,7 +1922,7 @@ static int current_may_throttle(void)
 
 	lru_add_drain();
 
-	spin_lock_irq(&pgdat->lru_lock);
+	spin_lock_irq(&lruvec->lru_lock);
 
 	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &page_list,
 				     &nr_scanned, sc, lru);
@@ -1937,7 +1934,7 @@ static int current_may_throttle(void)
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, nr_scanned);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_scanned);
-	spin_unlock_irq(&pgdat->lru_lock);
+	spin_unlock_irq(&lruvec->lru_lock);
 
 	if (nr_taken == 0)
 		return 0;
@@ -1945,7 +1942,7 @@ static int current_may_throttle(void)
 	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, 0,
 				&stat, false);
 
-	spin_lock_irq(&pgdat->lru_lock);
+	spin_lock_irq(&lruvec->lru_lock);
 
 	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
 	if (!cgroup_reclaim(sc))
@@ -1958,7 +1955,7 @@ static int current_may_throttle(void)
 
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 
-	spin_unlock_irq(&pgdat->lru_lock);
+	spin_unlock_irq(&lruvec->lru_lock);
 
 	mem_cgroup_uncharge_list(&page_list);
 	free_unref_page_list(&page_list);
@@ -2011,7 +2008,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 
 	lru_add_drain();
 
-	spin_lock_irq(&pgdat->lru_lock);
+	spin_lock_irq(&lruvec->lru_lock);
 
 	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &l_hold,
 				     &nr_scanned, sc, lru);
@@ -2022,7 +2019,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	__count_vm_events(PGREFILL, nr_scanned);
 	__count_memcg_events(lruvec_memcg(lruvec), PGREFILL, nr_scanned);
 
-	spin_unlock_irq(&pgdat->lru_lock);
+	spin_unlock_irq(&lruvec->lru_lock);
 
 	while (!list_empty(&l_hold)) {
 		cond_resched();
@@ -2068,7 +2065,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	/*
 	 * Move pages back to the lru list.
 	 */
-	spin_lock_irq(&pgdat->lru_lock);
+	spin_lock_irq(&lruvec->lru_lock);
 	/*
 	 * Count referenced pages from currently used mappings as rotated,
 	 * even though only some of them are actually re-activated.  This
@@ -2086,7 +2083,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 
-	spin_unlock_irq(&pgdat->lru_lock);
+	spin_unlock_irq(&lruvec->lru_lock);
 
 	mem_cgroup_uncharge_list(&l_active);
 	free_unref_page_list(&l_active);
@@ -2371,7 +2368,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	file  = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
 		lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES);
 
-	spin_lock_irq(&pgdat->lru_lock);
+	spin_lock_irq(&lruvec->lru_lock);
 	if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) {
 		reclaim_stat->recent_scanned[0] /= 2;
 		reclaim_stat->recent_rotated[0] /= 2;
@@ -2392,7 +2389,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	fp = file_prio * (reclaim_stat->recent_scanned[1] + 1);
 	fp /= reclaim_stat->recent_rotated[1] + 1;
-	spin_unlock_irq(&pgdat->lru_lock);
+	spin_unlock_irq(&lruvec->lru_lock);
 
 	fraction[0] = ap;
 	fraction[1] = fp;
@@ -4285,24 +4282,25 @@ int page_evictable(struct page *page)
 */
 void check_move_unevictable_pages(struct pagevec *pvec)
 {
-	struct lruvec *lruvec;
-	struct pglist_data *pgdat = NULL;
+	struct lruvec *lruvec = NULL;
 	int pgscanned = 0;
 	int pgrescued = 0;
 	int i;
 
 	for (i = 0; i < pvec->nr; i++) {
 		struct page *page = pvec->pages[i];
-		struct pglist_data *pagepgdat = page_pgdat(page);
+		struct pglist_data *pgdat = page_pgdat(page);
+		struct lruvec *new_lruvec = mem_cgroup_page_lruvec(page, pgdat);
+
 		pgscanned++;
-		if (pagepgdat != pgdat) {
-			if (pgdat)
-				spin_unlock_irq(&pgdat->lru_lock);
-			pgdat = pagepgdat;
-			spin_lock_irq(&pgdat->lru_lock);
+
+		if (lruvec != new_lruvec) {
+			if (lruvec)
+				spin_unlock_irq(&lruvec->lru_lock);
+			lruvec = new_lruvec;
+			spin_lock_irq(&lruvec->lru_lock);
 		}
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
 
 		if (!PageLRU(page) || !PageUnevictable(page))
 			continue;
@@ -4318,10 +4316,10 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		}
 	}
 
-	if (pgdat) {
+	if (lruvec) {
 		__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
 		__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-		spin_unlock_irq(&pgdat->lru_lock);
+		spin_unlock_irq(&lruvec->lru_lock);
 	}
 }
 EXPORT_SYMBOL_GPL(check_move_unevictable_pages);
This patchset moves lru_lock into lruvec, giving each lruvec its own
lru_lock, and thus a per-memcg lru_lock on each node.

This is the main patch that replaces the per-node lru_lock with a
per-memcg lruvec lock.

We introduce the function lock_page_lruvec, which behaves like the
vanilla pgdat lock when the memory cgroup is unset (w/o memcg). The
function keeps re-pinning the lruvec's lock to guard against
page->mem_cgroup changes in page migrations between memcgs. (Thanks to
Hugh Dickins and Konstantin Khlebnikov for the reminder on this; the
core logic is the same as in their previous patches.)

According to Daniel Jordan's suggestion, I ran 64 'dd' tasks in 32
containers on my 2-socket * 8-core * HT box with the modified case:
https://git.kernel.org/pub/scm/linux/kernel/git/wfg/vm-scalability.git/tree/case-lru-file-readtwice

With this and the later patches, the dd performance is 144MB/s; the
vanilla kernel performance is 123MB/s - a 17% performance increase.

Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Chris Down <chris@chrisdown.name>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Qian Cai <cai@lca.pw>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
Cc: swkhack <swkhack@gmail.com>
Cc: "Potyra, Stefan" <Stefan.Potyra@elektrobit.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Colin Ian King <colin.king@canonical.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Peng Fan <peng.fan@nxp.com>
Cc: Nikolay Borisov <nborisov@suse.com>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: Kirill Tkhai <ktkhai@virtuozzo.com>
Cc: Yafang Shao <laoar.shao@gmail.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Hugh Dickins <hughd@google.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
Cc: cgroups@vger.kernel.org
---
 include/linux/memcontrol.h | 24 +++++++++++++++
 include/linux/mmzone.h     |  2 ++
 mm/compaction.c            | 67 ++++++++++++++++++++++++++++-------------
 mm/huge_memory.c           | 15 ++++------
 mm/memcontrol.c            | 75 +++++++++++++++++++++++++++++++++-----------
 mm/mlock.c                 | 31 ++++++++++---------
 mm/mmzone.c                |  1 +
 mm/page_idle.c             |  5 ++--
 mm/swap.c                  | 74 +++++++++++++++++++-------------------------
 mm/vmscan.c                | 58 +++++++++++++++++------------------
 10 files changed, 214 insertions(+), 138 deletions(-)
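Taken together, the calling pattern the series aims for replaces the old
lock-then-lookup pair with a single helper call. A usage sketch composed
from the functions this patch introduces (illustrative, not a hunk from
the patch itself):

	struct lruvec *lruvec;
	unsigned long flags;

	/* Look up and lock the page's lruvec in one step. */
	lruvec = lock_page_lruvec_irqsave(page, page_pgdat(page), &flags);

	/* ... manipulate the page's LRU state under lruvec->lru_lock ... */

	spin_unlock_irqrestore(&lruvec->lru_lock, flags);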