From patchwork Wed Jun 26 13:03:47 2024
X-Patchwork-Submitter: Alexandre Ghiti
X-Patchwork-Id: 13712919
From: Alexandre Ghiti
To: Jonathan Corbet, Paul Walmsley, Palmer Dabbelt, Albert Ou,
    Andrea Parri, Nathan Chancellor, Peter Zijlstra, Ingo Molnar,
    Will Deacon, Waiman Long, Boqun Feng, Arnd Bergmann, Leonardo Bras,
    Guo Ren, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-riscv@lists.infradead.org, linux-arch@vger.kernel.org
Cc: Alexandre Ghiti
Subject: [PATCH v2 10/10] riscv: Add qspinlock support based on Zabha extension
Date: Wed, 26 Jun 2024 15:03:47 +0200
Message-Id: <20240626130347.520750-11-alexghiti@rivosinc.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240626130347.520750-1-alexghiti@rivosinc.com>
References: <20240626130347.520750-1-alexghiti@rivosinc.com>

In order to produce a generic kernel, a user can select
CONFIG_RISCV_COMBO_SPINLOCKS, which falls back at runtime to the ticket
spinlock implementation if Zabha is not present.

Note that we can't use alternatives here because extension discovery
happens too late, and we need to start with the qspinlock implementation
because the ticket spinlock implementation would pollute the spinlock
value, so let's use static keys.

This is largely based on Guo's work and Leonardo's reviews at [1].
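
To summarize the scheme in one place: the static key defaults to the
qspinlock path and is only disabled at boot when the required extensions
are missing. A minimal sketch follows; this is not the patch itself, and
riscv_has_zabha_zacas() is a hypothetical helper standing in for the
asm goto + ALTERNATIVE checks done in riscv_spinlock_init() in the diff
below:

  #include <linux/init.h>
  #include <linux/jump_label.h>

  /* Hypothetical stand-in for the ALTERNATIVE-based extension checks. */
  bool riscv_has_zabha_zacas(void);

  /*
   * The key defaults to true: early locking goes through the qspinlock
   * path, because a ticket-lock acquisition would leave a value in the
   * lock word that the qspinlock code cannot interpret.
   */
  DEFINE_STATIC_KEY_TRUE(qspinlock_key);

  static void __init riscv_spinlock_init(void)
  {
          if (!riscv_has_zabha_zacas())
                  static_branch_disable(&qspinlock_key); /* ticket locks */
  }

Every arch_spin_*() helper then tests qspinlock_key and dispatches to
either the queued or the ticket implementation.
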
Link: https://lore.kernel.org/linux-riscv/20231225125847.2778638-1-guoren@kernel.org/ [1]
Signed-off-by: Alexandre Ghiti
---
 .../locking/queued-spinlocks/arch-support.txt |  2 +-
 arch/riscv/Kconfig                            | 10 +++++
 arch/riscv/include/asm/Kbuild                 |  4 +-
 arch/riscv/include/asm/spinlock.h             | 39 +++++++++++++++++++
 arch/riscv/kernel/setup.c                     | 21 ++++++++++
 include/asm-generic/qspinlock.h               |  2 +
 include/asm-generic/ticket_spinlock.h         |  2 +
 7 files changed, 78 insertions(+), 2 deletions(-)
 create mode 100644 arch/riscv/include/asm/spinlock.h

diff --git a/Documentation/features/locking/queued-spinlocks/arch-support.txt b/Documentation/features/locking/queued-spinlocks/arch-support.txt
index 22f2990392ff..cf26042480e2 100644
--- a/Documentation/features/locking/queued-spinlocks/arch-support.txt
+++ b/Documentation/features/locking/queued-spinlocks/arch-support.txt
@@ -20,7 +20,7 @@
     |    openrisc: |  ok  |
     |      parisc: | TODO |
     |     powerpc: |  ok  |
-    |       riscv: | TODO |
+    |       riscv: |  ok  |
     |        s390: | TODO |
     |          sh: | TODO |
     |       sparc: |  ok  |
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 0bbaec0444d0..c2ba673e58ca 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -72,6 +72,7 @@ config RISCV
         select ARCH_WANT_OPTIMIZE_HUGETLB_VMEMMAP
         select ARCH_WANTS_NO_INSTR
         select ARCH_WANTS_THP_SWAP if HAVE_ARCH_TRANSPARENT_HUGEPAGE
+        select ARCH_WEAK_RELEASE_ACQUIRE if ARCH_USE_QUEUED_SPINLOCKS
         select BINFMT_FLAT_NO_DATA_START_OFFSET if !MMU
         select BUILDTIME_TABLE_SORT if MMU
         select CLINT_TIMER if RISCV_M_MODE
@@ -482,6 +483,15 @@ config NODES_SHIFT
           Specify the maximum number of NUMA Nodes available on the target
           system.  Increases memory reserved to accommodate various tables.
 
+config RISCV_COMBO_SPINLOCKS
+        bool "Using combo spinlock"
+        depends on SMP && MMU && TOOLCHAIN_HAS_ZABHA
+        select ARCH_USE_QUEUED_SPINLOCKS
+        default y
+        help
+          Embed both queued spinlock and ticket lock so that the spinlock
+          implementation can be chosen at runtime.
+
 config RISCV_ALTERNATIVE
         bool
         depends on !XIP_KERNEL
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 504f8b7e72d4..ad72f2bd4cc9 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -2,10 +2,12 @@
 generic-y += early_ioremap.h
 generic-y += flat.h
 generic-y += kvm_para.h
+generic-y += mcs_spinlock.h
 generic-y += parport.h
-generic-y += spinlock.h
 generic-y += spinlock_types.h
+generic-y += ticket_spinlock.h
 generic-y += qrwlock.h
 generic-y += qrwlock_types.h
+generic-y += qspinlock.h
 generic-y += user.h
 generic-y += vmlinux.lds.h
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
new file mode 100644
index 000000000000..4856d50006f2
--- /dev/null
+++ b/arch/riscv/include/asm/spinlock.h
@@ -0,0 +1,39 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_RISCV_SPINLOCK_H
+#define __ASM_RISCV_SPINLOCK_H
+
+#ifdef CONFIG_RISCV_COMBO_SPINLOCKS
+#define _Q_PENDING_LOOPS        (1 << 9)
+
+#define __no_arch_spinlock_redefine
+#include <asm/ticket_spinlock.h>
+#include <asm/qspinlock.h>
+#include <asm/jump_label.h>
+
+DECLARE_STATIC_KEY_TRUE(qspinlock_key);
+
+#define SPINLOCK_BASE_DECLARE(op, type, type_lock)              \
+static __always_inline type arch_spin_##op(type_lock lock)      \
+{                                                               \
+        if (static_branch_unlikely(&qspinlock_key))             \
+                return queued_spin_##op(lock);                  \
+        return ticket_spin_##op(lock);                          \
+}
+
+SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(unlock, void, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(is_locked, int, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(is_contended, int, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(trylock, bool, arch_spinlock_t *)
+SPINLOCK_BASE_DECLARE(value_unlocked, int, arch_spinlock_t)
+
+#else
+
+#include <asm/ticket_spinlock.h>
+
+#endif
+
+#include <asm/qrwlock.h>
+
+#endif /* __ASM_RISCV_SPINLOCK_H */
diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
index 4f73c0ae44b2..929bccd4c9e5 100644
--- a/arch/riscv/kernel/setup.c
+++ b/arch/riscv/kernel/setup.c
@@ -244,6 +244,26 @@ static void __init parse_dtb(void)
 #endif
 }
 
+DEFINE_STATIC_KEY_TRUE(qspinlock_key);
+EXPORT_SYMBOL(qspinlock_key);
+
+static void __init riscv_spinlock_init(void)
+{
+        asm goto(ALTERNATIVE("j %[no_zacas]", "nop", 0, RISCV_ISA_EXT_ZACAS, 1)
+                 : : : : no_zacas);
+        asm goto(ALTERNATIVE("nop", "j %[qspinlock]", 0, RISCV_ISA_EXT_ZABHA, 1)
+                 : : : : qspinlock);
+
+no_zacas:
+        static_branch_disable(&qspinlock_key);
+        pr_info("Ticket spinlock: enabled\n");
+
+        return;
+
+qspinlock:
+        pr_info("Queued spinlock: enabled\n");
+}
+
 extern void __init init_rt_signal_env(void);
 
 void __init setup_arch(char **cmdline_p)
@@ -295,6 +315,7 @@ void __init setup_arch(char **cmdline_p)
         riscv_set_dma_cache_alignment();
 
         riscv_user_isa_enable();
+        riscv_spinlock_init();
 }
 
 bool arch_cpu_is_hotpluggable(int cpu)
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 0655aa5b57b2..bf47cca2c375 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -136,6 +136,7 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
 }
 #endif
 
+#ifndef __no_arch_spinlock_redefine
 /*
  * Remapping spinlock architecture specific functions to the corresponding
  * queued spinlock functions.
@@ -146,5 +147,6 @@ static __always_inline bool virt_spin_lock(struct qspinlock *lock)
 #define arch_spin_lock(l)               queued_spin_lock(l)
 #define arch_spin_trylock(l)            queued_spin_trylock(l)
 #define arch_spin_unlock(l)             queued_spin_unlock(l)
+#endif
 
 #endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/asm-generic/ticket_spinlock.h b/include/asm-generic/ticket_spinlock.h
index cfcff22b37b3..325779970d8a 100644
--- a/include/asm-generic/ticket_spinlock.h
+++ b/include/asm-generic/ticket_spinlock.h
@@ -89,6 +89,7 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
         return (s16)((val >> 16) - (val & 0xffff)) > 1;
 }
 
+#ifndef __no_arch_spinlock_redefine
 /*
  * Remapping spinlock architecture specific functions to the corresponding
  * ticket spinlock functions.
@@ -99,5 +100,6 @@ static __always_inline int ticket_spin_is_contended(arch_spinlock_t *lock)
 #define arch_spin_lock(l)               ticket_spin_lock(l)
 #define arch_spin_trylock(l)            ticket_spin_trylock(l)
 #define arch_spin_unlock(l)             ticket_spin_unlock(l)
+#endif
 
 #endif /* __ASM_GENERIC_TICKET_SPINLOCK_H */
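
For reference, SPINLOCK_BASE_DECLARE(lock, void, arch_spinlock_t *) in the
new arch/riscv/include/asm/spinlock.h expands to roughly the following
wrapper; the other arch_spin_*() helpers follow the same shape:

  static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
  {
          if (static_branch_unlikely(&qspinlock_key))
                  return queued_spin_lock(lock);
          return ticket_spin_lock(lock);
  }

Since qspinlock_key is a static key that is disabled at most once, in
riscv_spinlock_init(), the test above is patched into a nop or a direct
jump at boot rather than evaluated on every lock operation.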