From patchwork Sat Jun 29 15:15:09 2013
From: Ard Biesheuvel
To: linux-kernel@vger.kernel.org
Cc: linux@arm.linux.org.uk, arnd@arndb.de, Ard Biesheuvel,
	catalin.marinas@arm.com, will.deacon@arm.com,
	linux-arm-kernel@lists.infradead.org
Subject: [RFC PATCH] sched: add preempt_[disable|enable]_strict()
Date: Sat, 29 Jun 2013 17:15:09 +0200
Message-Id: <1372518909-12609-1-git-send-email-ard.biesheuvel@linaro.org>

Add preempt_disable_strict() and preempt_enable_strict() functions that
can be used to demarcate atomic sections for which we would like to
enforce - even on non-PREEMPT builds with CONFIG_DEBUG_ATOMIC_SLEEP
disabled - that sleeping is not allowed.

The rationale is that in some cases, the risk of data corruption is high
while the likelihood of immediate detection is low, e.g., when using the
NEON unit in kernel mode on arm64.

Signed-off-by: Ard Biesheuvel
---
 include/linux/preempt.h | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index f5d4723..178bf2e 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -56,29 +56,33 @@ do { \
 
 #endif /* CONFIG_PREEMPT */
 
-#ifdef CONFIG_PREEMPT_COUNT
-
-#define preempt_disable() \
+#define preempt_disable_strict() \
 do { \
 	inc_preempt_count(); \
 	barrier(); \
 } while (0)
 
-#define sched_preempt_enable_no_resched() \
+#define __atomic_end() \
 do { \
 	barrier(); \
 	dec_preempt_count(); \
 } while (0)
 
-#define preempt_enable_no_resched()	sched_preempt_enable_no_resched()
-
-#define preempt_enable() \
+#define preempt_enable_strict() \
 do { \
-	preempt_enable_no_resched(); \
+	__atomic_end(); \
 	barrier(); \
 	preempt_check_resched(); \
 } while (0)
 
+#ifdef CONFIG_PREEMPT_COUNT
+
+#define preempt_disable()	preempt_disable_strict()
+#define preempt_enable()	preempt_enable_strict()
+
+#define sched_preempt_enable_no_resched()	__atomic_end()
+#define preempt_enable_no_resched()	__atomic_end()
+
 /* For debugging and tracer internals only! */
 #define add_preempt_count_notrace(val) \
 	do { preempt_count() += (val); } while (0)