From patchwork Sat Apr 30 15:36:24 2022
X-Patchwork-Submitter: Palmer Dabbelt
X-Patchwork-Id: 12833319
Subject: [PATCH v4 5/7] RISC-V: Move to generic spinlocks
Date: Sat, 30 Apr 2022 08:36:24 -0700
Message-Id: <20220430153626.30660-6-palmer@rivosinc.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220430153626.30660-1-palmer@rivosinc.com>
References: <20220430153626.30660-1-palmer@rivosinc.com>
Cc: guoren@kernel.org, peterz@infradead.org, mingo@redhat.com,
    Will Deacon, longman@redhat.com, boqun.feng@gmail.com,
    jonas@southpole.se, stefan.kristiansson@saunalahti.fi, shorne@gmail.com,
    Paul Walmsley, Palmer Dabbelt, aou@eecs.berkeley.edu, Arnd Bergmann,
    Greg KH, sudipm.mukherjee@gmail.com, macro@orcam.me.uk,
    jszhang@kernel.org, linux-csky@vger.kernel.org,
    linux-kernel@vger.kernel.org, openrisc@lists.librecores.org,
    linux-riscv@lists.infradead.org, linux-arch@vger.kernel.org
From: Palmer Dabbelt
To: Arnd Bergmann

From: Palmer Dabbelt

Our existing spinlocks aren't fair and replacing them has been on the
TODO list for a long time.  This moves to the recently-introduced ticket
spinlocks, which are simple enough that they are likely to be correct
and fast on the vast majority of extant implementations.

This introduces a horrible hack that allows us to split out the spinlock
conversion from the rwlock conversion.  We have to do the spinlocks
first because qrwlock needs fair spinlocks, but we don't want to pollute
the asm-generic code to support the generic spinlocks without qrwlocks.
Thus we pollute the RISC-V code, but just until the next commit as it's
all going away.

Signed-off-by: Palmer Dabbelt
Tested-by: Heiko Stuebner
Reviewed-by: Guo Ren
Tested-by: Conor Dooley
---
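For context on what "generic ticket spinlocks" means here: each acquirer
takes a ticket with a fetch-and-add and then spins until an owner counter
reaches that ticket, so the lock is granted strictly in arrival order.
Below is a minimal userspace sketch of the idea in C11 atomics; it is an
illustration only, not the asm-generic implementation (which is built on
the kernel's atomic_t primitives), and the ticket_lock_t type and function
names are made up for this example.

/*
 * Illustrative ticket spinlock, not kernel code.  C11 atomics stand in
 * for the kernel's atomic_t / smp_* primitives.
 */
#include <stdatomic.h>
#include <stdio.h>

typedef struct {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently allowed to hold the lock */
} ticket_lock_t;

#define TICKET_LOCK_INIT { 0, 0 }

static void ticket_lock(ticket_lock_t *lock)
{
	/* Take a ticket; the fetch-and-add serializes contenders. */
	unsigned int me = atomic_fetch_add_explicit(&lock->next, 1,
						    memory_order_relaxed);

	/* Wait for our turn; acquire ordering pairs with the unlock below. */
	while (atomic_load_explicit(&lock->owner, memory_order_acquire) != me)
		;	/* spin; a real lock would relax the CPU here */
}

static void ticket_unlock(ticket_lock_t *lock)
{
	/* Pass the lock to the next ticket holder with release ordering. */
	unsigned int owner = atomic_load_explicit(&lock->owner,
						  memory_order_relaxed);
	atomic_store_explicit(&lock->owner, owner + 1, memory_order_release);
}

int main(void)
{
	ticket_lock_t lock = TICKET_LOCK_INIT;

	ticket_lock(&lock);
	printf("lock held by ticket 0\n");
	ticket_unlock(&lock);
	return 0;
}

Fairness falls out of the ticket order: a newly arriving CPU cannot barge
ahead of one that has already been waiting, unlike the test-and-set lock
removed below.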
 arch/riscv/include/asm/Kbuild           |  2 ++
 arch/riscv/include/asm/spinlock.h       | 44 +++----------
 arch/riscv/include/asm/spinlock_types.h |  9 +++--
 3 files changed, 10 insertions(+), 45 deletions(-)

diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 5edf5b8587e7..c3f229ae8033 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -3,5 +3,7 @@ generic-y += early_ioremap.h
 generic-y += flat.h
 generic-y += kvm_para.h
 generic-y += parport.h
+generic-y += qrwlock.h
+generic-y += qrwlock_types.h
 generic-y += user.h
 generic-y += vmlinux.lds.h
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
index f4f7fa1b7ca8..88a4d5d0d98a 100644
--- a/arch/riscv/include/asm/spinlock.h
+++ b/arch/riscv/include/asm/spinlock.h
@@ -7,49 +7,13 @@
 #ifndef _ASM_RISCV_SPINLOCK_H
 #define _ASM_RISCV_SPINLOCK_H
 
+/* This is horible, but the whole file is going away in the next commit. */
+#define __ASM_GENERIC_QRWLOCK_H
+
 #include <linux/kernel.h>
 #include <asm/current.h>
 #include <asm/fence.h>
-
-/*
- * Simple spin lock operations.  These provide no fairness guarantees.
- */
-
-/* FIXME: Replace this with a ticket lock, like MIPS. */
-
-#define arch_spin_is_locked(x)	(READ_ONCE((x)->lock) != 0)
-
-static inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	smp_store_release(&lock->lock, 0);
-}
-
-static inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	int tmp = 1, busy;
-
-	__asm__ __volatile__ (
-		"	amoswap.w %0, %2, %1\n"
-		RISCV_ACQUIRE_BARRIER
-		: "=r" (busy), "+A" (lock->lock)
-		: "r" (tmp)
-		: "memory");
-
-	return !busy;
-}
-
-static inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	while (1) {
-		if (arch_spin_is_locked(lock))
-			continue;
-
-		if (arch_spin_trylock(lock))
-			break;
-	}
-}
-
-/***********************************************************/
+#include <asm-generic/spinlock.h>
 
 static inline void arch_read_lock(arch_rwlock_t *lock)
 {
diff --git a/arch/riscv/include/asm/spinlock_types.h b/arch/riscv/include/asm/spinlock_types.h
index 5a35a49505da..f2f9b5d7120d 100644
--- a/arch/riscv/include/asm/spinlock_types.h
+++ b/arch/riscv/include/asm/spinlock_types.h
@@ -6,15 +6,14 @@
 #ifndef _ASM_RISCV_SPINLOCK_TYPES_H
 #define _ASM_RISCV_SPINLOCK_TYPES_H
 
+/* This is horible, but the whole file is going away in the next commit. */
+#define __ASM_GENERIC_QRWLOCK_TYPES_H
+
 #ifndef __LINUX_SPINLOCK_TYPES_RAW_H
 # error "please don't include this file directly"
 #endif
 
-typedef struct {
-	volatile unsigned int lock;
-} arch_spinlock_t;
-
-#define __ARCH_SPIN_LOCK_UNLOCKED { 0 }
+#include <asm-generic/spinlock_types.h>
 
 typedef struct {
 	volatile unsigned int lock;
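The pre-defined __ASM_GENERIC_QRWLOCK_H and __ASM_GENERIC_QRWLOCK_TYPES_H
macros in the hunks above are the "horrible hack" the commit message
describes: defining another header's include guard up front turns a later
include of that header into a no-op, so the generic qrwlock definitions
stay out of the build and RISC-V's own arch_rwlock_t survives until the
next commit converts the rwlocks as well.  A standalone sketch of that
preprocessor trick follows; the file, macro, and type names here are
hypothetical, not the kernel headers.

/*
 * Single-file demo of suppressing a header by pre-defining its guard.
 * THIRD_PARTY_WIDGET_H and widget_t are hypothetical names.
 */
#define THIRD_PARTY_WIDGET_H	/* pretend the header was already included */

/* The "third party" header, inlined here so the example is self-contained: */
#ifndef THIRD_PARTY_WIDGET_H
#define THIRD_PARTY_WIDGET_H
typedef struct { int x; } widget_t;	/* would clash with the local type */
#endif

/*
 * Because the guard was pre-defined, the block above compiled to nothing,
 * and this local definition is the only widget_t the compiler ever sees.
 */
typedef struct { long x; } widget_t;

int main(void)
{
	widget_t w = { .x = 1 };
	return (int)(w.x - 1);	/* exits 0 */
}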