From patchwork Mon Nov 21 14:01:22 2016
X-Patchwork-Submitter: "Paul E. McKenney"
X-Patchwork-Id: 9486473
Date: Mon, 21 Nov 2016 06:01:22 -0800
From: "Paul E. McKenney"
To: Michal Hocko
Cc: Paul Menzel, linux-xfs@vger.kernel.org, linux-kernel@vger.kernel.org,
	Josh Triplett, dvteam@molgen.mpg.de
Subject: Re: INFO: rcu_sched detected stalls on CPUs/tasks with `kswapd` and `mem_cgroup_shrink_node`
Reply-To: paulmck@linux.vnet.ibm.com
In-Reply-To: <20161121134130.GB18112@dhcp22.suse.cz>
Message-Id: <20161121140122.GU3612@linux.vnet.ibm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailing-List: linux-xfs@vger.kernel.org

On Mon, Nov 21, 2016 at 02:41:31PM +0100, Michal Hocko wrote:
> On Wed 16-11-16 09:30:36, Paul E. McKenney wrote:
> > On Wed, Nov 16, 2016 at 06:01:19PM +0100, Paul Menzel wrote:
> > > Dear Linux folks,
> > > 
> > > 
> > > On 11/08/16 19:39, Paul E. McKenney wrote:
> > > >On Tue, Nov 08, 2016 at 06:38:18PM +0100, Paul Menzel wrote:
> > > >>On 11/08/16 18:03, Paul E. McKenney wrote:
> > > >>>On Tue, Nov 08, 2016 at 01:22:28PM +0100, Paul Menzel wrote:
> > > >>
> > > >>>>Could you please help me shed some light on the messages below?
> > > >>>>
> > > >>>>With Linux 4.4.X, these messages were not seen. After updating to
> > > >>>>Linux 4.8.4, and then Linux 4.8.6, they started to appear. In that
> > > >>>>version, we enabled several CGROUP options.
> > > >>>>
> > > >>>>>$ dmesg -T
> > > >>>>>[…]
> > > >>>>>[Mon Nov 7 15:09:45 2016] INFO: rcu_sched detected stalls on CPUs/tasks:
> > > >>>>>[Mon Nov 7 15:09:45 2016] 3-...: (493 ticks this GP) idle=515/140000000000000/0 softirq=5504423/5504423 fqs=13876
> > > >>>>>[Mon Nov 7 15:09:45 2016] (detected by 5, t=60002 jiffies, g=1363193, c=1363192, q=268508)
> > > >>>>>[Mon Nov 7 15:09:45 2016] Task dump for CPU 3:
> > > >>>>>[Mon Nov 7 15:09:45 2016] kswapd1  R  running task  0  87  2  0x00000008
> > > >>>>>[Mon Nov 7 15:09:45 2016] ffffffff81aabdfd ffff8810042a5cb8 ffff88080ad34000 ffff88080ad33dc8
> > > >>>>>[Mon Nov 7 15:09:45 2016] ffff88080ad33d00 0000000000003501 0000000000000000 0000000000000000
> > > >>>>>[Mon Nov 7 15:09:45 2016] 0000000000000000 0000000000000000 0000000000022316 000000000002bc9f
> > > >>>>>[Mon Nov 7 15:09:45 2016] Call Trace:
> > > >>>>>[Mon Nov 7 15:09:45 2016] [] ? __schedule+0x21d/0x5b0
> > > >>>>>[Mon Nov 7 15:09:45 2016] [] ? shrink_node+0xbf/0x1c0
> > > >>>>>[Mon Nov 7 15:09:45 2016] [] ? kswapd+0x315/0x5f0
> > > >>>>>[Mon Nov 7 15:09:45 2016] [] ? mem_cgroup_shrink_node+0x90/0x90
> > > >>>>>[Mon Nov 7 15:09:45 2016] [] ? kthread+0xc4/0xe0
> > > >>>>>[Mon Nov 7 15:09:45 2016] [] ? ret_from_fork+0x1f/0x40
> > > >>>>>[Mon Nov 7 15:09:45 2016] [] ? kthread_worker_fn+0x160/0x160
> > > >>>>
> > > >>>>Even after reading `stallwarn.txt` [1], I don’t know what could
> > > >>>>cause this. All items in the backtrace seem to belong to the Linux
> > > >>>>kernel.
> > > >>>>
> > > >>>>There is also nothing suspicious in the monitoring graphs during that time.
> > > >>>
> > > >>>If you let it be, do you get a later stall warning a few minutes later?
> > > >>>If so, how does the stack trace compare?
> > > >>
> > > >>With Linux 4.8.6, this is the only occurrence since yesterday.
> > > >>
> > > >>With Linux 4.8.3 and 4.8.4, the following stack traces were seen.
> > > >
> > > >Looks to me like one or both of the loops in shrink_node() need
> > > >a cond_resched_rcu_qs().
> > > 
> > > Thank you for the pointer. I haven’t had time yet to look into it.
> > 
> > In theory, it is quite straightforward, as shown by the patch below.
> > In practice, the MM guys might wish to call cond_resched_rcu_qs() less
> > frequently, but I will leave that to their judgment.  My guess is that
> > the overhead of the cond_resched_rcu_qs() is way down in the noise,
> > but I have been surprised in the past.
> > 
> > Anyway, please give this patch a try and let me know how it goes.
> 
> I am not seeing the full thread in my inbox, but I am wondering what is
> actually going on here. The reclaim path (shrink_node_memcg resp.
> shrink_slab) should have preemption points, and apart from iterating
> over all memcgs, not much else is done there. Are there gazillions of
> memcgs configured (most of them with the low limit configured)? In
> other words, is the system configured properly?
> 
> To the patch. I cannot say I like it. cond_resched_rcu_qs sounds way
> too low-level for this usage. If anything, a cond_resched somewhere
> inside mem_cgroup_iter would be more appropriate to me.

Like this?

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ae052b5e3315..81cb30d5b2fc 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -867,6 +867,7 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root,
 out:
 	if (prev && prev != root)
 		css_put(&prev->css);
+	cond_resched_rcu_qs();
 
 	return memcg;
 }
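
For reference, below is a minimal sketch of the alternative Paul first
suggested upthread: reporting the RCU quiescent state from shrink_node()'s
per-memcg reclaim loop rather than from inside mem_cgroup_iter(). This is
illustrative only, not a posted patch; the loop body is abbreviated, and
the variable names (memcg, root, reclaim) are assumed to match v4.8's
mm/vmscan.c.

	/*
	 * Sketch only -- not the patch above.  Report an RCU quiescent
	 * state (and reschedule if needed) once per memcg visited during
	 * node reclaim, so a long memcg walk cannot stall the grace period.
	 */
	memcg = mem_cgroup_iter(root, NULL, &reclaim);
	do {
		/* ... shrink this memcg's LRU lists and slab caches ... */

		cond_resched_rcu_qs();	/* quiescent state between memcgs */
	} while ((memcg = mem_cgroup_iter(root, memcg, &reclaim)));

The difference between the two placements is scope: hooking the iterator,
as in the posted patch, covers every caller of mem_cgroup_iter(), whereas
the sketch above confines the quiescent states to the node reclaim path.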