Message ID | 20200401225723.14164-3-pasha.tatashin@soleen.com
---|---
State | New, archived
Series | initialize deferred pages with interrupts enabled
On 02.04.20 00:57, Pavel Tatashin wrote:
> Initializing struct pages is a long task and keeping interrupts disabled
> for the duration of this operation introduces a number of problems.
>
> 1. jiffies are not updated for a long period of time, and thus incorrect
>    time is reported. See the proposed solution and discussion here:
>    lkml/20200311123848.118638-1-shile.zhang@linux.alibaba.com
> 2. It prevents further improving deferred page initialization via
>    intra-node multi-threading.
>
> We are keeping interrupts disabled to solve a rather theoretical problem
> that was never observed in the real world (see 3a2d7fa8a3d5).
>
> Let's keep interrupts enabled. In case we ever encounter a scenario where
> an interrupt thread wants to allocate a large amount of memory this early
> in boot, we can deal with that by growing the zone (see
> deferred_grow_zone()) by the needed amount before starting the
> deferred_init_memmap() threads.
>
> Before:
> [ 1.232459] node 0 initialised, 12058412 pages in 1ms
>
> After:
> [ 1.632580] node 0 initialised, 12051227 pages in 436ms
>
> Fixes: 3a2d7fa8a3d5 ("mm: disable interrupts while initializing deferred pages")
> Cc: stable@vger.kernel.org # 4.17+

Can you please add my details about the use of cond_resched() fixing
detection of RCU stalls?

https://lore.kernel.org/linux-mm/20200401104156.11564-2-david@redhat.com/

In the meantime, I'll give these two patches a churn. Thanks.
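As an aside on the fallback the commit message describes (pre-growing the zone before the deferred-init threads start), a minimal sketch follows. Only the deferred_grow_zone() signature is taken from the patch below; the caller, its name, and the chosen order are illustrative assumptions, not kernel code:

/*
 * Hypothetical early-boot caller, for illustration only:
 * deferred_grow_zone() is real (see the mm/page_alloc.c hunk below),
 * but this function and the chosen order are assumptions.
 */
static void __init pre_grow_zone_for_early_irq_user(struct zone *zone)
{
        unsigned int order = MAX_ORDER - 1;     /* assumed amount */

        /*
         * Grow the deferred zone by at least 1 << order pages before
         * any deferred_init_memmap() kthread starts, so an interrupt
         * thread allocating this early in boot finds initialized pages.
         */
        if (!deferred_grow_zone(zone, order))
                pr_warn("could not pre-grow zone %s\n", zone->name);
}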
> +        /*
> +         * Once we unlock here, the zone cannot be grown anymore, thus if an
> +         * interrupt thread must allocate this early in boot, zone must be
> +         * pre-grown prior to start of deferred page initialization.
> +         */
> +        pgdat_resize_unlock(pgdat, &flags);
> +
>          /* Only the highest zone is deferred so find it */
>          for (zid = 0; zid < MAX_NR_ZONES; zid++) {
>                  zone = pgdat->node_zones + zid;
> @@ -1809,11 +1816,9 @@ static int __init deferred_init_memmap(void *data)
>           */
>          while (spfn < epfn) {
>                  nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
> -                touch_nmi_watchdog();
> +                cond_resched();

I do wonder if this change is strictly required in this patch (IOW, if
we could keep calling touch_nmi_watchdog() also without holding a
spinlock). Anyhow, it's the right thing to do.

>          }
>  zone_empty:
> -        pgdat_resize_unlock(pgdat, &flags);
> -
>          /* Sanity check that the next zone really is unpopulated */
>          WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
>
> @@ -1855,17 +1860,6 @@ deferred_grow_zone(struct zone *zone, unsigned int order)
>
>          pgdat_resize_lock(pgdat, &flags);
>
> -        /*
> -         * If deferred pages have been initialized while we were waiting for
> -         * the lock, return true, as the zone was grown. The caller will retry
> -         * this zone. We won't return to this function since the caller also
> -         * has this static branch.
> -         */
> -        if (!static_branch_unlikely(&deferred_pages)) {
> -                pgdat_resize_unlock(pgdat, &flags);
> -                return true;
> -        }
> -
>          /*
>           * If someone grew this zone while we were waiting for spinlock, return
>           * true, as there might be enough pages already.

I think we should also look into cleaning up deferred_grow_zone() next;
we still have that touch_nmi_watchdog() in there, and we should rework
the locking. (I think Michal requested that as well.)

For now, this seems to survive my basic testing (RCU stalls gone).
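On the touch_nmi_watchdog() versus cond_resched() point above, a condensed view of the post-patch loop, with editorial comments added (the two statements come straight from the hunk; the comments are interpretation, not kernel documentation):

/*
 * Condensed from the deferred_init_memmap() hunk above. The resize lock
 * is already dropped at this point and interrupts are enabled, so
 * sleeping is legal.
 */
while (spfn < epfn) {
        nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);

        /*
         * cond_resched() may actually schedule, which lets RCU observe
         * a quiescent state and thus fixes RCU stall detection during
         * long initialization. touch_nmi_watchdog() only pets the
         * hard/soft lockup watchdogs and does nothing for RCU, but it
         * never sleeps, which is why it was used while the spinlock was
         * held.
         */
        cond_resched();
}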
On Thu 02-04-20 09:38:54, David Hildenbrand wrote:
> > +        /*
> > +         * Once we unlock here, the zone cannot be grown anymore, thus if an
> > +         * interrupt thread must allocate this early in boot, zone must be
> > +         * pre-grown prior to start of deferred page initialization.
> > +         */
> > +        pgdat_resize_unlock(pgdat, &flags);
> > +
> >          /* Only the highest zone is deferred so find it */
> >          for (zid = 0; zid < MAX_NR_ZONES; zid++) {
> >                  zone = pgdat->node_zones + zid;
> > @@ -1809,11 +1816,9 @@ static int __init deferred_init_memmap(void *data)
> >           */
> >          while (spfn < epfn) {
> >                  nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
> > -                touch_nmi_watchdog();
> > +                cond_resched();
>
> I do wonder if this change is strictly required in this patch (IOW, if
> we could keep calling touch_nmi_watchdog() also without holding a spinlock)

Exactly. I would go with your patch on top.

> Anyhow, it's the right thing to do.
>
> >          }
> >  zone_empty:
> > -        pgdat_resize_unlock(pgdat, &flags);
> > -
> >          /* Sanity check that the next zone really is unpopulated */
> >          WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
> >
> > @@ -1855,17 +1860,6 @@ deferred_grow_zone(struct zone *zone, unsigned int order)
> >
> >          pgdat_resize_lock(pgdat, &flags);
> >
> > -        /*
> > -         * If deferred pages have been initialized while we were waiting for
> > -         * the lock, return true, as the zone was grown. The caller will retry
> > -         * this zone. We won't return to this function since the caller also
> > -         * has this static branch.
> > -         */
> > -        if (!static_branch_unlikely(&deferred_pages)) {
> > -                pgdat_resize_unlock(pgdat, &flags);
> > -                return true;
> > -        }
> > -
> >          /*
> >           * If someone grew this zone while we were waiting for spinlock, return
> >           * true, as there might be enough pages already.
> >
>
> I think we should also look into cleaning up deferred_grow_zone() next;
> we still have that touch_nmi_watchdog() in there, and we should rework
> the locking. (I think Michal requested that as well.)
>
> For now, this seems to survive my basic testing (RCU stalls gone)
>
> --
> Thanks,
>
> David / dhildenb
On 4/2/20 12:57 AM, Pavel Tatashin wrote:
> Initializing struct pages is a long task and keeping interrupts disabled
> for the duration of this operation introduces a number of problems.
>
> 1. jiffies are not updated for a long period of time, and thus incorrect
>    time is reported. See the proposed solution and discussion here:
>    lkml/20200311123848.118638-1-shile.zhang@linux.alibaba.com
> 2. It prevents further improving deferred page initialization via
>    intra-node multi-threading.
>
> We are keeping interrupts disabled to solve a rather theoretical problem
> that was never observed in the real world (see 3a2d7fa8a3d5).
>
> Let's keep interrupts enabled. In case we ever encounter a scenario where
> an interrupt thread wants to allocate a large amount of memory this early
> in boot, we can deal with that by growing the zone (see
> deferred_grow_zone()) by the needed amount before starting the
> deferred_init_memmap() threads.
>
> Before:
> [ 1.232459] node 0 initialised, 12058412 pages in 1ms
>
> After:
> [ 1.632580] node 0 initialised, 12051227 pages in 436ms
>
> Fixes: 3a2d7fa8a3d5 ("mm: disable interrupts while initializing deferred pages")
> Cc: stable@vger.kernel.org # 4.17+

TBH I don't remember my concern anymore. Reading my mail now [1], it seems
I was thinking the problem could happen not just in interrupt context, but
with other kthreads as well. Anyway, I agree with the approach of waiting
for actual issues to be reported and then eventually pre-growing more.

Acked-by: Vlastimil Babka <vbabka@suse.cz>

[1] https://lore.kernel.org/linux-mm/33e3a3ff-0318-1a07-3c57-6be638046c87@suse.cz/

> Reported-by: Shile Zhang <shile.zhang@linux.alibaba.com>
> Signed-off-by: Pavel Tatashin <pasha.tatashin@soleen.com>
> Reviewed-by: Daniel Jordan <daniel.m.jordan@oracle.com>
> Acked-by: Michal Hocko <mhocko@suse.com>
> ---
>  include/linux/mmzone.h |  2 ++
>  mm/page_alloc.c        | 22 ++++++++--------------
>  2 files changed, 10 insertions(+), 14 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 462f6873905a..c5bdf55da034 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -721,6 +721,8 @@ typedef struct pglist_data {
>          /*
>           * Must be held any time you expect node_start_pfn,
>           * node_present_pages, node_spanned_pages or nr_zones to stay constant.
> +         * Also synchronizes pgdat->first_deferred_pfn during deferred page
> +         * init.
>           *
>           * pgdat_resize_lock() and pgdat_resize_unlock() are provided to
>           * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index e8ff6a176164..68669d3a5a66 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1790,6 +1790,13 @@ static int __init deferred_init_memmap(void *data)
>          BUG_ON(pgdat->first_deferred_pfn > pgdat_end_pfn(pgdat));
>          pgdat->first_deferred_pfn = ULONG_MAX;
>
> +        /*
> +         * Once we unlock here, the zone cannot be grown anymore, thus if an
> +         * interrupt thread must allocate this early in boot, zone must be
> +         * pre-grown prior to start of deferred page initialization.
> +         */
> +        pgdat_resize_unlock(pgdat, &flags);
> +
>          /* Only the highest zone is deferred so find it */
>          for (zid = 0; zid < MAX_NR_ZONES; zid++) {
>                  zone = pgdat->node_zones + zid;
> @@ -1809,11 +1816,9 @@ static int __init deferred_init_memmap(void *data)
>           */
>          while (spfn < epfn) {
>                  nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
> -                touch_nmi_watchdog();
> +                cond_resched();
>          }
>  zone_empty:
> -        pgdat_resize_unlock(pgdat, &flags);
> -
>          /* Sanity check that the next zone really is unpopulated */
>          WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));
>
> @@ -1855,17 +1860,6 @@ deferred_grow_zone(struct zone *zone, unsigned int order)
>
>          pgdat_resize_lock(pgdat, &flags);
>
> -        /*
> -         * If deferred pages have been initialized while we were waiting for
> -         * the lock, return true, as the zone was grown. The caller will retry
> -         * this zone. We won't return to this function since the caller also
> -         * has this static branch.
> -         */
> -        if (!static_branch_unlikely(&deferred_pages)) {
> -                pgdat_resize_unlock(pgdat, &flags);
> -                return true;
> -        }
> -
>          /*
>           * If someone grew this zone while we were waiting for spinlock, return
>           * true, as there might be enough pages already.
>
> TBH I don't remember my concern anymore. Reading my mail now [1], it seems
> I was thinking the problem could happen not just in interrupt context, but
> with other kthreads as well.
> Anyway, I agree with the approach of waiting for actual issues to be
> reported and then eventually pre-growing more.
>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>

Thank you.

Pasha
> > I do wonder if this change is strictly required in this patch (IOW, if
> > we could keep calling touch_nmi_watchdog() also without holding a spinlock)
>
> Exactly. I would go with your patch on top.
>
> > Anyhow, it's the right thing to do.

Michal,

The reason I changed it here is that the original patch, which this patch
fixes, changed cond_resched() to touch_nmi_watchdog():

$ git show 3a2d7fa8a3d5 | grep -E '(nmi|sched)'
-                cond_resched();
+                touch_nmi_watchdog();
-                cond_resched();
+                touch_nmi_watchdog();

So, should I move it to a separate patch or is it OK to keep it here?

Thank you,
Pasha
> Can you please add my details about the use of cond_resched() fixing
> detection of RCU stalls?
>
> https://lore.kernel.org/linux-mm/20200401104156.11564-2-david@redhat.com/
>
> In the meantime, I'll give these two patches a churn. Thanks

Hi David,

I will add your RCU details in the next revision.

Thank you,
Pasha
On Thu 02-04-20 11:13:49, Pavel Tatashin wrote:
> > > I do wonder if this change is strictly required in this patch (IOW, if
> > > we could keep calling touch_nmi_watchdog() also without holding a spinlock)
> >
> > Exactly. I would go with your patch on top.
> >
> > > Anyhow, it's the right thing to do.
>
> Michal,
>
> The reason I changed it here is that the original patch, which this patch
> fixes, changed cond_resched() to touch_nmi_watchdog():
>
> $ git show 3a2d7fa8a3d5 | grep -E '(nmi|sched)'
> -                cond_resched();
> +                touch_nmi_watchdog();
> -                cond_resched();
> +                touch_nmi_watchdog();
>
> So, should I move it to a separate patch or is it OK to keep it here?

Having it in a separate patch would be better IMO. If for nothing else,
the RCU stall fix would be easier to see.
> > So, should I move it to a separate patch or is it OK to keep it here?
>
> Having it in a separate patch would be better IMO. If for nothing else,
> the RCU stall fix would be easier to see.

Sure, I will submit it as a separate patch.

Thank you,
Pasha
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 462f6873905a..c5bdf55da034 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -721,6 +721,8 @@ typedef struct pglist_data {
         /*
          * Must be held any time you expect node_start_pfn,
          * node_present_pages, node_spanned_pages or nr_zones to stay constant.
+         * Also synchronizes pgdat->first_deferred_pfn during deferred page
+         * init.
          *
          * pgdat_resize_lock() and pgdat_resize_unlock() are provided to
          * manipulate node_size_lock without checking for CONFIG_MEMORY_HOTPLUG
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e8ff6a176164..68669d3a5a66 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1790,6 +1790,13 @@ static int __init deferred_init_memmap(void *data)
         BUG_ON(pgdat->first_deferred_pfn > pgdat_end_pfn(pgdat));
         pgdat->first_deferred_pfn = ULONG_MAX;

+        /*
+         * Once we unlock here, the zone cannot be grown anymore, thus if an
+         * interrupt thread must allocate this early in boot, zone must be
+         * pre-grown prior to start of deferred page initialization.
+         */
+        pgdat_resize_unlock(pgdat, &flags);
+
         /* Only the highest zone is deferred so find it */
         for (zid = 0; zid < MAX_NR_ZONES; zid++) {
                 zone = pgdat->node_zones + zid;
@@ -1809,11 +1816,9 @@ static int __init deferred_init_memmap(void *data)
          */
         while (spfn < epfn) {
                 nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
-                touch_nmi_watchdog();
+                cond_resched();
         }
 zone_empty:
-        pgdat_resize_unlock(pgdat, &flags);
-
         /* Sanity check that the next zone really is unpopulated */
         WARN_ON(++zid < MAX_NR_ZONES && populated_zone(++zone));

@@ -1855,17 +1860,6 @@ deferred_grow_zone(struct zone *zone, unsigned int order)

         pgdat_resize_lock(pgdat, &flags);

-        /*
-         * If deferred pages have been initialized while we were waiting for
-         * the lock, return true, as the zone was grown. The caller will retry
-         * this zone. We won't return to this function since the caller also
-         * has this static branch.
-         */
-        if (!static_branch_unlikely(&deferred_pages)) {
-                pgdat_resize_unlock(pgdat, &flags);
-                return true;
-        }
-
         /*
          * If someone grew this zone while we were waiting for spinlock, return
          * true, as there might be enough pages already.
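For a bird's-eye view of the locking change, here is a heavily abridged sketch of deferred_init_memmap() after this patch. The "elided" comments stand in for code not shown in the hunks; only the lines visible above are authoritative, the rest is reconstruction for illustration:

static int __init deferred_init_memmap(void *data)
{
        pg_data_t *pgdat = data;
        unsigned long spfn = 0, epfn = 0, nr_pages = 0, flags;
        struct zone *zone;
        u64 i;

        /* elided: bail out early if nothing was deferred on this node */

        pgdat_resize_lock(pgdat, &flags);
        pgdat->first_deferred_pfn = ULONG_MAX;

        /*
         * Unlock before the long-running loop: from here on the zone can
         * no longer be grown, so early interrupt-context allocations must
         * have pre-grown it. The lock now effectively guards only
         * first_deferred_pfn.
         */
        pgdat_resize_unlock(pgdat, &flags);

        /* elided: find the highest (deferred) zone and its first PFN range */

        /* The bulk of the work now runs with interrupts enabled. */
        while (spfn < epfn) {
                nr_pages += deferred_init_maxorder(&i, zone, &spfn, &epfn);
                cond_resched();
        }

        /* elided: sanity checks and the "node %d initialised" printout */
        return 0;
}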