From patchwork Fri May 6 13:21:40 2016
X-Patchwork-Submitter: Dario Faggioli
X-Patchwork-Id: 9033181
Message-ID: <1462540900.3355.43.camel@citrix.com>
From: Dario Faggioli
To: George Dunlap
Date: Fri, 6 May 2016 15:21:40 +0200
In-Reply-To: <572A32BF.5030404@citrix.com>
References: <146231184906.25631.6550047090421454264.stgit@Solace.fritz.box> <146231199550.25631.15153219462074625034.stgit@Solace.fritz.box> <572A1128.7010609@citrix.com> <1462377503.6981.21.camel@citrix.com> <572A2BDD.4050506@citrix.com> <1462382486.6981.30.camel@citrix.com> <572A32BF.5030404@citrix.com>
Organization: Citrix Inc.
Cc: Wei Liu
Subject: Re: [Xen-devel] [PATCH for 4.7 1/4] xen: sched: avoid spuriously re-enabling IRQs in csched2_switch_sched()

On Wed, 2016-05-04 at 18:34 +0100, George Dunlap wrote:
> On 04/05/16 18:21, Dario Faggioli wrote:
> >
> > After all, I'm fine with an ASSERT() too, but then I think we should
> > add one to the same effect to csched_switch_sched() too.
>
> Well, an ASSERT() is sort of like a comment, in that if you see
> ASSERT(irqs_disabled()), you know there's no need to save irqs because
> they should already be disabled. But it has the advantage that osstest
> will be able to "read" it once we get some proper cpupool tests for
> osstest. :-)
>
> If we weren't in the feature freeze, I'd definitely favor adding an
> ASSERT to credit1. As it is, I think either way (adding now or waiting
> until the 4.8 development window) should be fine.
>
Ok, here you go (inline and attached): the patch, with ASSERT()-s in both
Credit2 and Credit1 (despite the freeze, I think it's the best thing to
do; see the changelog).

Thanks and Regards,
Dario

Reviewed-by: George Dunlap
---
commit cbabd44e171d0bd2169f1c7100e69a9e48289980
Author: Dario Faggioli
Date:   Tue Apr 26 18:56:56 2016 +0200

    xen: sched: avoid spuriously re-enabling IRQs in csched2_switch_sched()

    Interrupts are already disabled when calling the hook
    (from schedule_cpu_switch()), so we must use spin_lock()
    and spin_unlock().
    Add an ASSERT(), so we will notice if this code and its
    caller get out of sync with respect to disabling interrupts
    (and add one at the same exact occurrence of this pattern
    in Credit1 too).

    Signed-off-by: Dario Faggioli
    ---
    Cc: George Dunlap
    Cc: Wei Liu
    --
    Changes from v1:
     * add the ASSERT(), as requested by George
     * add the ASSERT in Credit1 too
    --
    For Wei:
     - the Credit2 spin_lock_irq()-->spin_lock() change needs
       to go in, as it fixes a bug;
     - adding the ASSERT was requested during review;
     - adding the ASSERT in Credit1 is not strictly necessary,
       but it improves code quality and consistency at zero cost
       and risk, so I think we should just go for it now, instead
       of waiting for 4.8 (it's basically like I'm adding a
       comment!).
diff --git a/xen/common/sched_credit.c b/xen/common/sched_credit.c
index db4d42a..a38a63d 100644
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -615,6 +615,7 @@ csched_switch_sched(struct scheduler *new_ops, unsigned int cpu,
      * schedule_cpu_switch()). It actually may or may not be the 'right'
      * one for this cpu, but that is ok for preventing races.
      */
+    ASSERT(!local_irq_is_enabled());
     spin_lock(&prv->lock);
     init_pdata(prv, pdata, cpu);
     spin_unlock(&prv->lock);
diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index f3b62ac..f95e509 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -2238,7 +2238,8 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
      * And owning exactly that one (the lock of the old scheduler of this
      * cpu) is what is necessary to prevent races.
      */
-    spin_lock_irq(&prv->lock);
+    ASSERT(!local_irq_is_enabled());
+    spin_lock(&prv->lock);

     idle_vcpu[cpu]->sched_priv = vdata;

@@ -2263,7 +2264,7 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
     smp_mb();
     per_cpu(schedule_data, cpu).schedule_lock = &prv->rqd[rqi].lock;

-    spin_unlock_irq(&prv->lock);
+    spin_unlock(&prv->lock);
 }

 static void