From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
To: linux-arm-kernel@lists.infradead.org
Cc: Russell King, Ard Biesheuvel, Thomas Gleixner, Sebastian Andrzej Siewior
Subject: [PATCH v2 1/4] ARM: vfp: Provide vfp_lock() for VFP locking.
Date: Wed, 28 Jun 2023 10:05:13 +0200
Message-Id: <20230628080516.798032-2-bigeasy@linutronix.de>
In-Reply-To: <20230628080516.798032-1-bigeasy@linutronix.de>
References: <20230628080516.798032-1-bigeasy@linutronix.de>

kernel_neon_begin() uses local_bh_disable() to ensure exclusive access
to the VFP unit. This is broken on PREEMPT_RT because a BH-disabled
section remains preemptible on PREEMPT_RT.

Introduce vfp_lock(), which uses local_bh_disable() on non-RT kernels
and preempt_disable() on PREEMPT_RT. Since softirqs are always processed
in thread context on RT, disabling preemption is enough to ensure that
the current context won't get interrupted by something that is using the
VFP. Use it in kernel_neon_begin().

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/arm/vfp/vfpmodule.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 58a9442add24b..0a21e13095809 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -54,6 +54,34 @@ extern unsigned int VFP_arch_feroceon __alias(VFP_arch);
  */
 union vfp_state *vfp_current_hw_state[NR_CPUS];
 
+/*
+ * Claim ownership of the VFP unit.
+ *
+ * The caller may change VFP registers until vfp_unlock() is called.
+ *
+ * local_bh_disable() is used to disable preemption and to disable VFP
+ * processing in softirq context. On PREEMPT_RT kernels local_bh_disable() is
+ * not sufficient because it only serializes soft interrupt related sections
+ * via a local lock, but stays preemptible. Disabling preemption is the right
+ * choice here as bottom half processing is always in thread context on RT
+ * kernels so it implicitly prevents bottom half processing as well.
+ */
+static void vfp_lock(void)
+{
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_disable();
+	else
+		preempt_disable();
+}
+
+static void vfp_unlock(void)
+{
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT))
+		local_bh_enable();
+	else
+		preempt_enable();
+}
+
 /*
  * Is 'thread's most up to date state stored in this CPUs hardware?
  * Must be called from non-preemptible context.
@@ -818,7 +846,7 @@ void kernel_neon_begin(void)
 	unsigned int cpu;
 	u32 fpexc;
 
-	local_bh_disable();
+	vfp_lock();
 
 	/*
 	 * Kernel mode NEON is only allowed outside of hardirq context with
@@ -849,7 +877,7 @@ void kernel_neon_end(void)
 {
 	/* Disable the NEON/VFP unit. */
 	fmxr(FPEXC, fmrx(FPEXC) & ~FPEXC_EN);
-	local_bh_enable();
+	vfp_unlock();
 }
 EXPORT_SYMBOL(kernel_neon_end);