Message ID | 201710011444.IBD05725.VJSFHOOMOFtLQF@I-love.SAKURA.ne.jp (mailing list archive) |
---|---|
State | New, archived |
Michal Hocko wrote:
> [Hmm, I do not see the original patch which this has been a reply to]

urbl.hostedemail.com and b.barracudacentral.org blocked my IP address, and
the rest are "Recipient address rejected: Greylisted" or "Deferred: 451-4.3.0
Multiple destination domains per transaction is unsupported.", and in the end
my mail was dropped at the servers. Sad...

>
> On Mon 02-10-17 06:59:12, Michael S. Tsirkin wrote:
> > On Sun, Oct 01, 2017 at 02:44:34PM +0900, Tetsuo Handa wrote:
> > > Tetsuo Handa wrote:
> > > > Michael S. Tsirkin wrote:
> > > > > On Mon, Sep 11, 2017 at 07:27:19PM +0900, Tetsuo Handa wrote:
> > > > > > Hello.
> > > > > >
> > > > > > I noticed that virtio_balloon is using register_oom_notifier() and that
> > > > > > leak_balloon() from virtballoon_oom_notify() might depend on a
> > > > > > __GFP_DIRECT_RECLAIM memory allocation.
> > > > > >
> > > > > > In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to
> > > > > > serialize against fill_balloon(). But in fill_balloon(),
> > > > > > alloc_page(GFP_HIGHUSER[_MOVABLE] | __GFP_NOMEMALLOC | __GFP_NORETRY) is
> > > > > > called with the vb->balloon_lock mutex held. Since GFP_HIGHUSER[_MOVABLE]
> > > > > > implies __GFP_DIRECT_RECLAIM | __GFP_IO | __GFP_FS, this allocation attempt
> > > > > > might depend on somebody else's __GFP_DIRECT_RECLAIM | !__GFP_NORETRY memory
> > > > > > allocation. Such a __GFP_DIRECT_RECLAIM | !__GFP_NORETRY allocation can
> > > > > > reach __alloc_pages_may_oom(), hold the oom_lock mutex, and call
> > > > > > out_of_memory(). And leak_balloon() is called by virtballoon_oom_notify()
> > > > > > via the blocking_notifier_call_chain() callback while the vb->balloon_lock
> > > > > > mutex is already held by fill_balloon(). As a result, even though
> > > > > > __GFP_NORETRY is specified, fill_balloon() can indirectly get stuck waiting
> > > > > > for the vb->balloon_lock mutex at leak_balloon().
>
> This is really nasty!
> And I would argue that this is an abuse of the oom notifier interface
> from the virtio code. OOM notifiers are an ugly hack on their own, but
> all their users have to be really careful not to depend on any allocation
> request, because that is a straight deadlock situation.

Please document this warning at the "int register_oom_notifier(struct
notifier_block *nb)" definition.

> I do not think that making the oom notifier API more complex is the way
> to go. Can we simply change the lock to try_lock?

Using mutex_trylock(&vb->balloon_lock) alone is not sufficient. Inside the
mutex, a __GFP_DIRECT_RECLAIM && !__GFP_NORETRY allocation attempt is used,
which will fail to make progress because oom_lock is already held. Therefore,
virtballoon_oom_notify() needs to guarantee that all allocation attempts use
GFP_NOWAIT when called from virtballoon_oom_notify().

virtballoon_oom_notify() can guarantee GFP_NOIO using
memalloc_noio_{save,restore}() (which is currently missing, because
blocking_notifier_call_chain() might be called from a GFP_NOIO allocation
request (e.g. disk_events_workfn)). But there is no
memalloc_nodirectreclaim_{save,restore}() for guaranteeing that GFP_NOWAIT is
used. virtballoon_oom_notify() would need to use some flag and switch between
GFP_NOWAIT and GFP_KERNEL based on that flag. I worry that such an approach
is prone to oversight.

> If the lock is held we would simply fall back to the normal OOM handling.
> As a follow up it would be great if virtio could use some other form of
> aging e.g. shrinker.
> --
> Michal Hocko
> SUSE Labs
```diff
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index dee0f75..d9744f7 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -981,9 +981,37 @@ static void check_panic_on_oom(struct oom_control *oc,
 }
 
 static BLOCKING_NOTIFIER_HEAD(oom_notify_list);
+static bool oom_notifier_requested;
+static unsigned long oom_notifier_freed;
+static struct task_struct *oom_notifier_th;
+static DECLARE_WAIT_QUEUE_HEAD(oom_notifier_request_wait);
+static DECLARE_WAIT_QUEUE_HEAD(oom_notifier_response_wait);
+
+static int oom_notifier(void *unused)
+{
+	while (true) {
+		wait_event_freezable(oom_notifier_request_wait,
+				     oom_notifier_requested);
+		blocking_notifier_call_chain(&oom_notify_list, 0,
+					     &oom_notifier_freed);
+		oom_notifier_requested = false;
+		wake_up(&oom_notifier_response_wait);
+	}
+	return 0;
+}
 
 int register_oom_notifier(struct notifier_block *nb)
 {
+	if (!oom_notifier_th) {
+		struct task_struct *th = kthread_run(oom_notifier, NULL,
+						     "oom_notifier");
+
+		if (IS_ERR(th)) {
+			pr_err("Unable to start OOM notifier thread.\n");
+			return (int) PTR_ERR(th);
+		}
+		oom_notifier_th = th;
+	}
 	return blocking_notifier_chain_register(&oom_notify_list, nb);
 }
 EXPORT_SYMBOL_GPL(register_oom_notifier);
@@ -1005,17 +1033,21 @@ int unregister_oom_notifier(struct notifier_block *nb)
  */
 bool out_of_memory(struct oom_control *oc)
 {
-	unsigned long freed = 0;
 	enum oom_constraint constraint = CONSTRAINT_NONE;
 
 	if (oom_killer_disabled)
 		return false;
 
-	if (!is_memcg_oom(oc)) {
-		blocking_notifier_call_chain(&oom_notify_list, 0, &freed);
-		if (freed > 0)
+	if (!is_memcg_oom(oc) && oom_notifier_th) {
+		oom_notifier_requested = true;
+		wake_up(&oom_notifier_request_wait);
+		wait_event_timeout(oom_notifier_response_wait,
+				   !oom_notifier_requested, 5 * HZ);
+		if (oom_notifier_freed) {
+			oom_notifier_freed = 0;
 			/* Got some memory back in the last second. */
 			return true;
+		}
 	}
 
 	/*
```