From patchwork Tue Jan 26 16:25:12 2016
X-Patchwork-Submitter: David Vrabel
X-Patchwork-Id: 8124701
From: David Vrabel
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper, Jennifer Herbert, David Vrabel, Jan Beulich, Ian Campbell
Date: Tue, 26 Jan 2016 16:25:12 +0000
Subject: [Xen-devel] [PATCHv2 3/4] spinlock: move rwlock API and per-cpu rwlocks into their own files
Message-ID: <1453825513-1611-4-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1453825513-1611-1-git-send-email-david.vrabel@citrix.com>
References: <1453825513-1611-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 2.1.4

From: Jennifer Herbert

In preparation for a replacement read-write lock implementation, move
the API and the per-cpu read-write locks into their own files.
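For illustration, a caller of the moved per-cpu rwlock API looks roughly as
below. This sketch is not part of the patch; "demo_percpu" and "demo_lock"
are made-up names standing in for a real per-cpu variable and resource.

    /* Hypothetical example only -- not part of this patch. */
    DEFINE_PERCPU_RWLOCK_GLOBAL(demo_percpu);
    static DEFINE_PERCPU_RWLOCK_RESOURCE(demo_lock, demo_percpu);

    static void demo_read_side(void)
    {
        /* Fast path: records this CPU as a reader of demo_lock. */
        percpu_read_lock(demo_percpu, &demo_lock);
        /* ... read-side critical section ... */
        percpu_read_unlock(demo_percpu, &demo_lock);
    }

    static void demo_write_side(void)
    {
        /* Takes the underlying rwlock, then waits for all per-cpu readers. */
        percpu_write_lock(demo_percpu, &demo_lock);
        /* ... write-side critical section ... */
        percpu_write_unlock(demo_percpu, &demo_lock);
    }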
Signed-off-by: Jennifer Herbert
Signed-off-by: David Vrabel
---
v2:
- new
---
 xen/arch/x86/mm/mem_sharing.c |   1 +
 xen/common/Makefile           |   1 +
 xen/common/rwlock.c           |  47 +++++++++++++
 xen/common/spinlock.c         |  45 -------------
 xen/include/asm-x86/mm.h      |   1 +
 xen/include/xen/grant_table.h |   1 +
 xen/include/xen/rwlock.h      | 150 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/sched.h       |   1 +
 xen/include/xen/spinlock.h    | 143 ----------------------------------------
 9 files changed, 202 insertions(+), 188 deletions(-)
 create mode 100644 xen/common/rwlock.c
 create mode 100644 xen/include/xen/rwlock.h

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index a95e105..a522423 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 4df71ee..6e82b33 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -30,6 +30,7 @@ obj-y += rangeset.o
 obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += rcupdate.o
+obj-y += rwlock.o
 obj-$(CONFIG_SCHED_ARINC653) += sched_arinc653.o
 obj-$(CONFIG_SCHED_CREDIT) += sched_credit.o
 obj-$(CONFIG_SCHED_CREDIT2) += sched_credit2.o
diff --git a/xen/common/rwlock.c b/xen/common/rwlock.c
new file mode 100644
index 0000000..410d4dc
--- /dev/null
+++ b/xen/common/rwlock.c
@@ -0,0 +1,47 @@
+#include
+#include
+
+static DEFINE_PER_CPU(cpumask_t, percpu_rwlock_readers);
+
+void _percpu_write_lock(percpu_rwlock_t **per_cpudata,
+                        percpu_rwlock_t *percpu_rwlock)
+{
+    unsigned int cpu;
+    cpumask_t *rwlock_readers = &this_cpu(percpu_rwlock_readers);
+
+    /* Validate the correct per_cpudata variable has been provided. */
+    _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
+
+    /*
+     * First take the write lock to protect against other writers or slow
+     * path readers.
+     */
+    write_lock(&percpu_rwlock->rwlock);
+
+    /* Now set the global variable so that readers start using read_lock. */
+    percpu_rwlock->writer_activating = 1;
+    smp_mb();
+
+    /* Using a per cpu cpumask is only safe if there is no nesting. */
+    ASSERT(!in_irq());
+    cpumask_copy(rwlock_readers, &cpu_online_map);
+
+    /* Check if there are any percpu readers in progress on this rwlock. */
+    for ( ; ; )
+    {
+        for_each_cpu(cpu, rwlock_readers)
+        {
+            /*
+             * Remove any percpu readers not contending on this rwlock
+             * from our check mask.
+             */
+            if ( per_cpu_ptr(per_cpudata, cpu) != percpu_rwlock )
+                __cpumask_clear_cpu(cpu, rwlock_readers);
+        }
+        /* Check if we've cleared all percpu readers from check mask. */
+        if ( cpumask_empty(rwlock_readers) )
+            break;
+        /* Give the coherency fabric a break. */
+        cpu_relax();
+    };
+}
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index bab1f95..7b0cf6c 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -10,8 +10,6 @@
 #include
 #include
 
-static DEFINE_PER_CPU(cpumask_t, percpu_rwlock_readers);
-
 #ifndef NDEBUG
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
@@ -494,49 +492,6 @@ int _rw_is_write_locked(rwlock_t *lock)
     return (lock->lock == RW_WRITE_FLAG); /* writer in critical section? */
 }
 
-void _percpu_write_lock(percpu_rwlock_t **per_cpudata,
-                        percpu_rwlock_t *percpu_rwlock)
-{
-    unsigned int cpu;
-    cpumask_t *rwlock_readers = &this_cpu(percpu_rwlock_readers);
-
-    /* Validate the correct per_cpudata variable has been provided. */
-    _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
-
-    /*
-     * First take the write lock to protect against other writers or slow
-     * path readers.
-     */
-    write_lock(&percpu_rwlock->rwlock);
-
-    /* Now set the global variable so that readers start using read_lock. */
-    percpu_rwlock->writer_activating = 1;
-    smp_mb();
-
-    /* Using a per cpu cpumask is only safe if there is no nesting. */
-    ASSERT(!in_irq());
-    cpumask_copy(rwlock_readers, &cpu_online_map);
-
-    /* Check if there are any percpu readers in progress on this rwlock. */
-    for ( ; ; )
-    {
-        for_each_cpu(cpu, rwlock_readers)
-        {
-            /*
-             * Remove any percpu readers not contending on this rwlock
-             * from our check mask.
-             */
-            if ( per_cpu_ptr(per_cpudata, cpu) != percpu_rwlock )
-                __cpumask_clear_cpu(cpu, rwlock_readers);
-        }
-        /* Check if we've cleared all percpu readers from check mask. */
-        if ( cpumask_empty(rwlock_readers) )
-            break;
-        /* Give the coherency fabric a break. */
-        cpu_relax();
-    };
-}
-
 #ifdef LOCK_PROFILE
 
 struct lock_profile_anc {
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7598414..4560deb 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -5,6 +5,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index b4f064e..32b0ecd 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -23,6 +23,7 @@
 #ifndef __XEN_GRANT_TABLE_H__
 #define __XEN_GRANT_TABLE_H__
 
+#include
 #include
 #include
 #include
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
new file mode 100644
index 0000000..9d87783
--- /dev/null
+++ b/xen/include/xen/rwlock.h
@@ -0,0 +1,150 @@
+#ifndef __RWLOCK_H__
+#define __RWLOCK_H__
+
+#include
+
+#define read_lock(l) _read_lock(l)
+#define read_lock_irq(l) _read_lock_irq(l)
+#define read_lock_irqsave(l, f) \
+    ({ \
+        BUILD_BUG_ON(sizeof(f) != sizeof(unsigned long)); \
+        ((f) = _read_lock_irqsave(l)); \
+    })
+
+#define read_unlock(l) _read_unlock(l)
+#define read_unlock_irq(l) _read_unlock_irq(l)
+#define read_unlock_irqrestore(l, f) _read_unlock_irqrestore(l, f)
+#define read_trylock(l) _read_trylock(l)
+
+#define write_lock(l) _write_lock(l)
+#define write_lock_irq(l) _write_lock_irq(l)
+#define write_lock_irqsave(l, f) \
+    ({ \
+        BUILD_BUG_ON(sizeof(f) != sizeof(unsigned long)); \
+        ((f) = _write_lock_irqsave(l)); \
+    })
+#define write_trylock(l) _write_trylock(l)
+
+#define write_unlock(l) _write_unlock(l)
+#define write_unlock_irq(l) _write_unlock_irq(l)
+#define write_unlock_irqrestore(l, f) _write_unlock_irqrestore(l, f)
+
+#define rw_is_locked(l) _rw_is_locked(l)
+#define rw_is_write_locked(l) _rw_is_write_locked(l)
+
+
+typedef struct percpu_rwlock percpu_rwlock_t;
+
+struct percpu_rwlock {
+    rwlock_t rwlock;
+    bool_t writer_activating;
+#ifndef NDEBUG
+    percpu_rwlock_t **percpu_owner;
+#endif
+};
+
+#ifndef NDEBUG
+#define PERCPU_RW_LOCK_UNLOCKED(owner) { RW_LOCK_UNLOCKED, 0, owner }
+static inline void _percpu_rwlock_owner_check(percpu_rwlock_t **per_cpudata,
+                                              percpu_rwlock_t *percpu_rwlock)
+{
+    ASSERT(per_cpudata == percpu_rwlock->percpu_owner);
+}
+#else
+#define PERCPU_RW_LOCK_UNLOCKED(owner) { RW_LOCK_UNLOCKED, 0 }
+#define _percpu_rwlock_owner_check(data, lock) ((void)0)
+#endif
+
+#define DEFINE_PERCPU_RWLOCK_RESOURCE(l, owner) \
+    percpu_rwlock_t l = PERCPU_RW_LOCK_UNLOCKED(&get_per_cpu_var(owner))
+#define percpu_rwlock_resource_init(l, owner) \
+    (*(l) = (percpu_rwlock_t)PERCPU_RW_LOCK_UNLOCKED(&get_per_cpu_var(owner)))
+
+static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
+                                     percpu_rwlock_t *percpu_rwlock)
+{
+    /* Validate the correct per_cpudata variable has been provided. */
+    _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
+
+    /* We cannot support recursion on the same lock. */
+    ASSERT(this_cpu_ptr(per_cpudata) != percpu_rwlock);
+    /*
+     * Detect using a second percpu_rwlock_t simulatenously and fallback
+     * to standard read_lock.
+     */
+    if ( unlikely(this_cpu_ptr(per_cpudata) != NULL) )
+    {
+        read_lock(&percpu_rwlock->rwlock);
+        return;
+    }
+
+    /* Indicate this cpu is reading. */
+    this_cpu_ptr(per_cpudata) = percpu_rwlock;
+    smp_mb();
+    /* Check if a writer is waiting. */
+    if ( unlikely(percpu_rwlock->writer_activating) )
+    {
+        /* Let the waiting writer know we aren't holding the lock. */
+        this_cpu_ptr(per_cpudata) = NULL;
+        /* Wait using the read lock to keep the lock fair. */
+        read_lock(&percpu_rwlock->rwlock);
+        /* Set the per CPU data again and continue. */
+        this_cpu_ptr(per_cpudata) = percpu_rwlock;
+        /* Drop the read lock because we don't need it anymore. */
+        read_unlock(&percpu_rwlock->rwlock);
+    }
+}
+
+static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
+                                       percpu_rwlock_t *percpu_rwlock)
+{
+    /* Validate the correct per_cpudata variable has been provided. */
+    _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
+
+    /* Verify the read lock was taken for this lock */
+    ASSERT(this_cpu_ptr(per_cpudata) != NULL);
+    /*
+     * Detect using a second percpu_rwlock_t simulatenously and fallback
+     * to standard read_unlock.
+     */
+    if ( unlikely(this_cpu_ptr(per_cpudata) != percpu_rwlock) )
+    {
+        read_unlock(&percpu_rwlock->rwlock);
+        return;
+    }
+    this_cpu_ptr(per_cpudata) = NULL;
+    smp_wmb();
+}
+
+/* Don't inline percpu write lock as it's a complex function. */
+void _percpu_write_lock(percpu_rwlock_t **per_cpudata,
+                        percpu_rwlock_t *percpu_rwlock);
+
+static inline void _percpu_write_unlock(percpu_rwlock_t **per_cpudata,
+                                        percpu_rwlock_t *percpu_rwlock)
+{
+    /* Validate the correct per_cpudata variable has been provided. */
+    _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
+
+    ASSERT(percpu_rwlock->writer_activating);
+    percpu_rwlock->writer_activating = 0;
+    write_unlock(&percpu_rwlock->rwlock);
+}
+
+#define percpu_rw_is_write_locked(l) _rw_is_write_locked(&((l)->rwlock))
+
+#define percpu_read_lock(percpu, lock) \
+    _percpu_read_lock(&get_per_cpu_var(percpu), lock)
+#define percpu_read_unlock(percpu, lock) \
+    _percpu_read_unlock(&get_per_cpu_var(percpu), lock)
+#define percpu_write_lock(percpu, lock) \
+    _percpu_write_lock(&get_per_cpu_var(percpu), lock)
+#define percpu_write_unlock(percpu, lock) \
+    _percpu_write_unlock(&get_per_cpu_var(percpu), lock)
+
+#define DEFINE_PERCPU_RWLOCK_GLOBAL(name) DEFINE_PER_CPU(percpu_rwlock_t *, \
+                                                         name)
+#define DECLARE_PERCPU_RWLOCK_GLOBAL(name) DECLARE_PER_CPU(percpu_rwlock_t *, \
+                                                           name)
+
+#endif /* __RWLOCK_H__ */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 5870745..b47a3fe 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -4,6 +4,7 @@
 
 #include
 #include
+#include
 #include
 #include
 #include
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 22c4fc2..765db51 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -236,147 +236,4 @@ int _rw_is_write_locked(rwlock_t *lock);
 #define spin_lock_recursive(l) _spin_lock_recursive(l)
 #define spin_unlock_recursive(l) _spin_unlock_recursive(l)
 
-#define read_lock(l) _read_lock(l)
-#define read_lock_irq(l) _read_lock_irq(l)
-#define read_lock_irqsave(l, f) \
-    ({ \
-        BUILD_BUG_ON(sizeof(f) != sizeof(unsigned long)); \
-        ((f) = _read_lock_irqsave(l)); \
-    })
-
-#define read_unlock(l) _read_unlock(l)
-#define read_unlock_irq(l) _read_unlock_irq(l)
-#define read_unlock_irqrestore(l, f) _read_unlock_irqrestore(l, f)
-#define read_trylock(l) _read_trylock(l)
-
-#define write_lock(l) _write_lock(l)
-#define write_lock_irq(l) _write_lock_irq(l)
-#define write_lock_irqsave(l, f) \
-    ({ \
-        BUILD_BUG_ON(sizeof(f) != sizeof(unsigned long)); \
-        ((f) = _write_lock_irqsave(l)); \
-    })
-#define write_trylock(l) _write_trylock(l)
-
-#define write_unlock(l) _write_unlock(l)
-#define write_unlock_irq(l) _write_unlock_irq(l)
-#define write_unlock_irqrestore(l, f) _write_unlock_irqrestore(l, f)
-
-#define rw_is_locked(l) _rw_is_locked(l)
-#define rw_is_write_locked(l) _rw_is_write_locked(l)
-
-typedef struct percpu_rwlock percpu_rwlock_t;
-
-struct percpu_rwlock {
-    rwlock_t rwlock;
-    bool_t writer_activating;
-#ifndef NDEBUG
-    percpu_rwlock_t **percpu_owner;
-#endif
-};
-
-#ifndef NDEBUG
-#define PERCPU_RW_LOCK_UNLOCKED(owner) { RW_LOCK_UNLOCKED, 0, owner }
-static inline void _percpu_rwlock_owner_check(percpu_rwlock_t **per_cpudata,
-                                              percpu_rwlock_t *percpu_rwlock)
-{
-    ASSERT(per_cpudata == percpu_rwlock->percpu_owner);
-}
-#else
-#define PERCPU_RW_LOCK_UNLOCKED(owner) { RW_LOCK_UNLOCKED, 0 }
-#define _percpu_rwlock_owner_check(data, lock) ((void)0)
-#endif
-
-#define DEFINE_PERCPU_RWLOCK_RESOURCE(l, owner) \
-    percpu_rwlock_t l = PERCPU_RW_LOCK_UNLOCKED(&get_per_cpu_var(owner))
-#define percpu_rwlock_resource_init(l, owner) \
-    (*(l) = (percpu_rwlock_t)PERCPU_RW_LOCK_UNLOCKED(&get_per_cpu_var(owner)))
-
-static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
-                                     percpu_rwlock_t *percpu_rwlock)
-{
-    /* Validate the correct per_cpudata variable has been provided. */
-    _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
-
-    /* We cannot support recursion on the same lock. */
-    ASSERT(this_cpu_ptr(per_cpudata) != percpu_rwlock);
-    /*
-     * Detect using a second percpu_rwlock_t simulatenously and fallback
-     * to standard read_lock.
-     */
-    if ( unlikely(this_cpu_ptr(per_cpudata) != NULL) )
-    {
-        read_lock(&percpu_rwlock->rwlock);
-        return;
-    }
-
-    /* Indicate this cpu is reading. */
-    this_cpu_ptr(per_cpudata) = percpu_rwlock;
-    smp_mb();
-    /* Check if a writer is waiting. */
-    if ( unlikely(percpu_rwlock->writer_activating) )
-    {
-        /* Let the waiting writer know we aren't holding the lock. */
-        this_cpu_ptr(per_cpudata) = NULL;
-        /* Wait using the read lock to keep the lock fair. */
-        read_lock(&percpu_rwlock->rwlock);
-        /* Set the per CPU data again and continue. */
-        this_cpu_ptr(per_cpudata) = percpu_rwlock;
-        /* Drop the read lock because we don't need it anymore. */
-        read_unlock(&percpu_rwlock->rwlock);
-    }
-}
-
-static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
-                                       percpu_rwlock_t *percpu_rwlock)
-{
-    /* Validate the correct per_cpudata variable has been provided. */
-    _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
-
-    /* Verify the read lock was taken for this lock */
-    ASSERT(this_cpu_ptr(per_cpudata) != NULL);
-    /*
-     * Detect using a second percpu_rwlock_t simulatenously and fallback
-     * to standard read_unlock.
-     */
-    if ( unlikely(this_cpu_ptr(per_cpudata) != percpu_rwlock) )
-    {
-        read_unlock(&percpu_rwlock->rwlock);
-        return;
-    }
-    this_cpu_ptr(per_cpudata) = NULL;
-    smp_wmb();
-}
-
-/* Don't inline percpu write lock as it's a complex function. */
-void _percpu_write_lock(percpu_rwlock_t **per_cpudata,
-                        percpu_rwlock_t *percpu_rwlock);
-
-static inline void _percpu_write_unlock(percpu_rwlock_t **per_cpudata,
-                                        percpu_rwlock_t *percpu_rwlock)
-{
-    /* Validate the correct per_cpudata variable has been provided. */
-    _percpu_rwlock_owner_check(per_cpudata, percpu_rwlock);
-
-    ASSERT(percpu_rwlock->writer_activating);
-    percpu_rwlock->writer_activating = 0;
-    write_unlock(&percpu_rwlock->rwlock);
-}
-
-#define percpu_rw_is_write_locked(l) _rw_is_write_locked(&((l)->rwlock))
-
-#define percpu_read_lock(percpu, lock) \
-    _percpu_read_lock(&get_per_cpu_var(percpu), lock)
-#define percpu_read_unlock(percpu, lock) \
-    _percpu_read_unlock(&get_per_cpu_var(percpu), lock)
-#define percpu_write_lock(percpu, lock) \
-    _percpu_write_lock(&get_per_cpu_var(percpu), lock)
-#define percpu_write_unlock(percpu, lock) \
-    _percpu_write_unlock(&get_per_cpu_var(percpu), lock)
-
-#define DEFINE_PERCPU_RWLOCK_GLOBAL(name) DEFINE_PER_CPU(percpu_rwlock_t *, \
-                                                         name)
-#define DECLARE_PERCPU_RWLOCK_GLOBAL(name) DECLARE_PER_CPU(percpu_rwlock_t *, \
-                                                           name)
-
 #endif /* __SPINLOCK_H__ */