From patchwork Fri May 18 07:49:04 2018
X-Patchwork-Submitter: Kent Overstreet
X-Patchwork-Id: 10408475
From: Kent Overstreet
To: linux-kernel@vger.kernel.org, linux-fsdevel@vger.kernel.org
Cc: Kent Overstreet, Andrew Morton, Dave Chinner, darrick.wong@oracle.com,
    tytso@mit.edu, linux-btrfs@vger.kernel.org, clm@fb.com, jbacik@fb.com,
    viro@zeniv.linux.org.uk, willy@infradead.org, peterz@infradead.org
Subject: [PATCH 03/10] locking: bring back lglocks
Date: Fri, 18 May 2018 03:49:04 -0400
Message-Id: <20180518074918.13816-7-kent.overstreet@gmail.com>
In-Reply-To: <20180518074918.13816-1-kent.overstreet@gmail.com>
References: <20180518074918.13816-1-kent.overstreet@gmail.com>
X-Mailer: git-send-email 2.17.0
X-Mailing-List: linux-btrfs@vger.kernel.org

bcachefs makes use of them - also, add a proper lg_lock_init()

Signed-off-by: Kent Overstreet
---
 include/linux/lglock.h  |  97 +++++++++++++++++++++++++++++++++++++
 kernel/locking/Makefile |   1 +
 kernel/locking/lglock.c | 105 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 203 insertions(+)
 create mode 100644 include/linux/lglock.h
 create mode 100644 kernel/locking/lglock.c

diff --git a/include/linux/lglock.h b/include/linux/lglock.h
new file mode 100644
index 0000000000..c1fbe64bd2
--- /dev/null
+++ b/include/linux/lglock.h
@@ -0,0 +1,97 @@
+/*
+ * Specialised local-global spinlock. Can only be declared as global variables
+ * to avoid overhead and keep things simple (and we don't want to start using
+ * these inside dynamically allocated structures).
+ *
+ * "local/global locks" (lglocks) can be used to:
+ *
+ * - Provide fast exclusive access to per-CPU data, with exclusive access to
+ *   another CPU's data allowed but possibly subject to contention, and to
+ *   provide very slow exclusive access to all per-CPU data.
+ * - Or to provide very fast and scalable read serialisation, and to provide
+ *   very slow exclusive serialisation of data (not necessarily per-CPU data).
+ *
+ * Brlocks are also implemented as a short-hand notation for the latter use
+ * case.
+ *
+ * Copyright 2009, 2010, Nick Piggin, Novell Inc.
+ */
+#ifndef __LINUX_LGLOCK_H
+#define __LINUX_LGLOCK_H
+
+#include
+#include
+#include
+#include
+#include
+
+#ifdef CONFIG_SMP
+
+struct lglock {
+	arch_spinlock_t __percpu *lock;
+#ifdef CONFIG_DEBUG_LOCK_ALLOC
+	struct lockdep_map dep_map;
+#endif
+};
+
+#define DEFINE_LGLOCK(name)						\
+	static DEFINE_PER_CPU(arch_spinlock_t, name ## _lock)		\
+	= __ARCH_SPIN_LOCK_UNLOCKED;					\
+	struct lglock name = { .lock = &name ## _lock }
+
+#define DEFINE_STATIC_LGLOCK(name)					\
+	static DEFINE_PER_CPU(arch_spinlock_t, name ## _lock)		\
+	= __ARCH_SPIN_LOCK_UNLOCKED;					\
+	static struct lglock name = { .lock = &name ## _lock }
+
+static inline void lg_lock_free(struct lglock *lg)
+{
+	free_percpu(lg->lock);
+}
+
+#define lg_lock_lockdep_init(lock)					\
+do {									\
+	static struct lock_class_key __key;				\
+									\
+	lockdep_init_map(&(lock)->dep_map, #lock, &__key, 0);		\
+} while (0)
+
+static inline int __lg_lock_init(struct lglock *lg)
+{
+	lg->lock = alloc_percpu(*lg->lock);
+	return lg->lock ? 0 : -ENOMEM;
+}
+
+#define lg_lock_init(lock)						\
+({									\
+	lg_lock_lockdep_init(lock);					\
+	__lg_lock_init(lock);						\
+})
+
+void lg_local_lock(struct lglock *lg);
+void lg_local_unlock(struct lglock *lg);
+void lg_local_lock_cpu(struct lglock *lg, int cpu);
+void lg_local_unlock_cpu(struct lglock *lg, int cpu);
+
+void lg_double_lock(struct lglock *lg, int cpu1, int cpu2);
+void lg_double_unlock(struct lglock *lg, int cpu1, int cpu2);
+
+void lg_global_lock(struct lglock *lg);
+void lg_global_unlock(struct lglock *lg);
+
+#else
+/* When !CONFIG_SMP, map lglock to spinlock */
+#define lglock spinlock
+#define DEFINE_LGLOCK(name) DEFINE_SPINLOCK(name)
+#define DEFINE_STATIC_LGLOCK(name) static DEFINE_SPINLOCK(name)
+#define lg_lock_init(lg) ({ spin_lock_init(lg); 0; })
+#define lg_lock_free(lg) do {} while (0)
+#define lg_local_lock spin_lock
+#define lg_local_unlock spin_unlock
+#define lg_local_lock_cpu(lg, cpu) spin_lock(lg)
+#define lg_local_unlock_cpu(lg, cpu) spin_unlock(lg)
+#define lg_global_lock spin_lock
+#define lg_global_unlock spin_unlock
+#endif
+
+#endif
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index 392c7f23af..e5bb62823d 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -19,6 +19,7 @@ obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
 endif
 obj-$(CONFIG_SMP) += spinlock.o
 obj-$(CONFIG_LOCK_SPIN_ON_OWNER) += osq_lock.o
+obj-$(CONFIG_SMP) += lglock.o
 obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
 obj-$(CONFIG_QUEUED_SPINLOCKS) += qspinlock.o
 obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
diff --git a/kernel/locking/lglock.c b/kernel/locking/lglock.c
new file mode 100644
index 0000000000..051feaccc4
--- /dev/null
+++ b/kernel/locking/lglock.c
@@ -0,0 +1,105 @@
+/* See include/linux/lglock.h for description */
+#include
+#include
+#include
+#include
+
+/*
+ * Note there is no uninit, so lglocks cannot be defined in
+ * modules (but it's fine to use them from there)
+ * Could be added though, just undo lg_lock_init
+ */
+
+void lg_local_lock(struct lglock *lg)
+{
+	arch_spinlock_t *lock;
+
+	preempt_disable();
+	lock_acquire_shared(&lg->dep_map, 0, 0, NULL, _RET_IP_);
+	lock = this_cpu_ptr(lg->lock);
+	arch_spin_lock(lock);
+}
+EXPORT_SYMBOL(lg_local_lock);
+
+void lg_local_unlock(struct lglock *lg)
+{
+	arch_spinlock_t *lock;
+
+	lock_release(&lg->dep_map, 1, _RET_IP_);
+	lock = this_cpu_ptr(lg->lock);
+	arch_spin_unlock(lock);
+	preempt_enable();
+}
+EXPORT_SYMBOL(lg_local_unlock);
+
+void lg_local_lock_cpu(struct lglock *lg, int cpu)
+{
+	arch_spinlock_t *lock;
+
+	preempt_disable();
+	lock_acquire_shared(&lg->dep_map, 0, 0, NULL, _RET_IP_);
+	lock = per_cpu_ptr(lg->lock, cpu);
+	arch_spin_lock(lock);
+}
+EXPORT_SYMBOL(lg_local_lock_cpu);
+
+void lg_local_unlock_cpu(struct lglock *lg, int cpu)
+{
+	arch_spinlock_t *lock;
+
+	lock_release(&lg->dep_map, 1, _RET_IP_);
+	lock = per_cpu_ptr(lg->lock, cpu);
+	arch_spin_unlock(lock);
+	preempt_enable();
+}
+EXPORT_SYMBOL(lg_local_unlock_cpu);
+
+void lg_double_lock(struct lglock *lg, int cpu1, int cpu2)
+{
+	BUG_ON(cpu1 == cpu2);
+
+	/* lock in cpu order, just like lg_global_lock */
+	if (cpu2 < cpu1)
+		swap(cpu1, cpu2);
+
+	preempt_disable();
+	lock_acquire_shared(&lg->dep_map, 0, 0, NULL, _RET_IP_);
+	arch_spin_lock(per_cpu_ptr(lg->lock, cpu1));
+	arch_spin_lock(per_cpu_ptr(lg->lock, cpu2));
+}
+
+void lg_double_unlock(struct lglock *lg, int cpu1, int cpu2)
+{
+	lock_release(&lg->dep_map, 1, _RET_IP_);
+	arch_spin_unlock(per_cpu_ptr(lg->lock, cpu1));
+	arch_spin_unlock(per_cpu_ptr(lg->lock, cpu2));
+	preempt_enable();
+}
+
+void lg_global_lock(struct lglock *lg)
+{
+	int i;
+
+	preempt_disable();
+	lock_acquire_exclusive(&lg->dep_map, 0, 0, NULL, _RET_IP_);
+	for_each_possible_cpu(i) {
+		arch_spinlock_t *lock;
+		lock = per_cpu_ptr(lg->lock, i);
+		arch_spin_lock(lock);
+	}
+}
+EXPORT_SYMBOL(lg_global_lock);
+
+void lg_global_unlock(struct lglock *lg)
+{
+	int i;
+
+	lock_release(&lg->dep_map, 1, _RET_IP_);
+	for_each_possible_cpu(i) {
+		arch_spinlock_t *lock;
+		lock = per_cpu_ptr(lg->lock, i);
+		arch_spin_unlock(lock);
+	}
+	preempt_enable();
+}
+EXPORT_SYMBOL(lg_global_unlock);
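
---
For reviewers who haven't seen the old lglock API: a minimal usage sketch against
the interface added above. This is not part of the patch and not how bcachefs uses
it; example_lock and the example_* functions are invented purely for illustration.

	/* example only: names below are hypothetical, not from this series */
	#include <linux/lglock.h>

	static struct lglock example_lock;

	static int __init example_init(void)
	{
		/* allocates the per-CPU spinlocks; returns -ENOMEM on failure */
		return lg_lock_init(&example_lock);
	}

	static void example_fast_path(void)
	{
		/* take only this CPU's lock: the cheap, scalable side */
		lg_local_lock(&example_lock);
		/* ... touch this CPU's data ... */
		lg_local_unlock(&example_lock);
	}

	static void example_slow_path(void)
	{
		/* take every CPU's lock: the expensive, exclusive side */
		lg_global_lock(&example_lock);
		/* ... touch all per-CPU data ... */
		lg_global_unlock(&example_lock);
	}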