From patchwork Thu Feb 24 10:54:35 2022
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 12758331
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Andrew Cooper, George Dunlap, Jan Beulich, Julien Grall,
 Stefano Stabellini, Wei Liu
Subject: [PATCH 1/2] xen/spinlock: use lock address for lock debug functions
Date: Thu, 24 Feb 2022 11:54:35 +0100
Message-Id: <20220224105436.1480-2-jgross@suse.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220224105436.1480-1-jgross@suse.com>
References: <20220224105436.1480-1-jgross@suse.com>

Instead of only passing the lock_debug address to check_lock() et al,
use the address of the spinlock.

Signed-off-by: Juergen Gross
---
 xen/common/spinlock.c      | 34 +++++++++++++++++-----------------
 xen/include/xen/rwlock.h   | 10 +++++-----
 xen/include/xen/spinlock.h | 10 ++++++++--
 3 files changed, 30 insertions(+), 24 deletions(-)
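[Editorial note: the interface change is small but worth seeing in isolation.
The following standalone sketch is plain C, not Xen code; every type and
function name is a simplified stand-in. It contrasts the old convention of
handing the debug helper only the embedded lock_debug member with the new one
of handing it the whole lock, so the helper can reach any field it needs.]

#include <stdbool.h>
#include <stdio.h>

union lock_debug_model {
    unsigned int val;
    struct {
        unsigned int cpu:12;
        unsigned int irq_safe:1;
        unsigned int unseen:1;
    };
};

typedef struct spinlock_model {
    unsigned int tickets;               /* stand-in for the ticket pair */
    union lock_debug_model debug;
} spinlock_model_t;

/* Old convention: the helper only ever sees the embedded debug member. */
static void check_lock_old(union lock_debug_model *debug, bool try)
{
    printf("old: cpu field = %u, try = %d\n", (unsigned)debug->cpu, (int)try);
}

/*
 * New convention: the helper gets the whole lock and reaches the debug
 * state (or any other lock field it may need later) via lock->debug.
 */
static void check_lock_new(spinlock_model_t *lock, bool try)
{
    printf("new: cpu field = %u, try = %d\n", (unsigned)lock->debug.cpu, (int)try);
}

int main(void)
{
    spinlock_model_t lock = { .debug = { .val = 0xffffu } };

    check_lock_old(&lock.debug, false);   /* caller must name the member */
    check_lock_new(&lock, false);         /* caller just passes the lock */
    return 0;
}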
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 62c83aaa6a..53d6ab6853 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-void check_lock(union lock_debug *debug, bool try)
+void check_lock(spinlock_t *lock, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -49,12 +49,12 @@ void check_lock(union lock_debug *debug, bool try)
     if ( try && irq_safe )
         return;
 
-    if ( unlikely(debug->irq_safe != irq_safe) )
+    if ( unlikely(lock->debug.irq_safe != irq_safe) )
     {
         union lock_debug seen, new = { 0 };
 
         new.irq_safe = irq_safe;
-        seen.val = cmpxchg(&debug->val, LOCK_DEBUG_INITVAL, new.val);
+        seen.val = cmpxchg(&lock->debug.val, LOCK_DEBUG_INITVAL, new.val);
 
         if ( !seen.unseen && seen.irq_safe == !irq_safe )
         {
@@ -65,7 +65,7 @@ void check_lock(union lock_debug *debug, bool try)
     }
 }
 
-static void check_barrier(union lock_debug *debug)
+static void check_barrier(spinlock_t *lock)
 {
     if ( unlikely(atomic_read(&spin_debug) <= 0) )
         return;
@@ -81,19 +81,19 @@ static void check_barrier(union lock_debug *debug)
      * However, if we spin on an IRQ-unsafe lock with IRQs disabled then that
      * is clearly wrong, for the same reason outlined in check_lock() above.
      */
-    BUG_ON(!local_irq_is_enabled() && !debug->irq_safe);
+    BUG_ON(!local_irq_is_enabled() && !lock->debug.irq_safe);
 }
 
-static void got_lock(union lock_debug *debug)
+static void got_lock(spinlock_t *lock)
 {
-    debug->cpu = smp_processor_id();
+    lock->debug.cpu = smp_processor_id();
 }
 
-static void rel_lock(union lock_debug *debug)
+static void rel_lock(spinlock_t *lock)
 {
     if ( atomic_read(&spin_debug) > 0 )
-        BUG_ON(debug->cpu != smp_processor_id());
-    debug->cpu = SPINLOCK_NO_CPU;
+        BUG_ON(lock->debug.cpu != smp_processor_id());
+    lock->debug.cpu = SPINLOCK_NO_CPU;
 }
 
 void spin_debug_enable(void)
@@ -164,7 +164,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
     LOCK_PROFILE_VAR;
 
-    check_lock(&lock->debug, false);
+    check_lock(lock, false);
     preempt_disable();
     tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
                                            tickets.head_tail);
@@ -176,7 +176,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
             arch_lock_relax();
     }
     arch_lock_acquire_barrier();
-    got_lock(&lock->debug);
+    got_lock(lock);
     LOCK_PROFILE_GOT;
 }
 
@@ -204,7 +204,7 @@ unsigned long _spin_lock_irqsave(spinlock_t *lock)
 void _spin_unlock(spinlock_t *lock)
 {
     LOCK_PROFILE_REL;
-    rel_lock(&lock->debug);
+    rel_lock(lock);
     arch_lock_release_barrier();
     add_sized(&lock->tickets.head, 1);
     arch_lock_signal();
@@ -240,7 +240,7 @@ int _spin_trylock(spinlock_t *lock)
     spinlock_tickets_t old, new;
 
     preempt_disable();
-    check_lock(&lock->debug, true);
+    check_lock(lock, true);
     old = observe_lock(&lock->tickets);
     if ( old.head != old.tail )
     {
@@ -259,7 +259,7 @@ int _spin_trylock(spinlock_t *lock)
      * cmpxchg() is a full barrier so no need for an
      * arch_lock_acquire_barrier().
      */
-    got_lock(&lock->debug);
+    got_lock(lock);
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
     if (lock->profile)
         lock->profile->time_locked = NOW();
@@ -274,7 +274,7 @@ void _spin_barrier(spinlock_t *lock)
     s_time_t block = NOW();
 #endif
 
-    check_barrier(&lock->debug);
+    check_barrier(lock);
     smp_mb();
     sample = observe_lock(&lock->tickets);
     if ( sample.head != sample.tail )
@@ -300,7 +300,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
-    check_lock(&lock->debug, true);
+    check_lock(lock, true);
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 0cc9167715..188fc809dc 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -56,7 +56,7 @@ static inline int _read_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
-    check_lock(&lock->lock.debug, true);
+    check_lock(&lock->lock, true);
     cnts = atomic_read(&lock->cnts);
     if ( likely(_can_read_lock(cnts)) )
     {
@@ -90,7 +90,7 @@ static inline void _read_lock(rwlock_t *lock)
     if ( likely(_can_read_lock(cnts)) )
     {
         /* The slow path calls check_lock() via spin_lock(). */
-        check_lock(&lock->lock.debug, false);
+        check_lock(&lock->lock, false);
         return;
     }
 
@@ -169,7 +169,7 @@ static inline void _write_lock(rwlock_t *lock)
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
     {
         /* The slow path calls check_lock() via spin_lock(). */
-        check_lock(&lock->lock.debug, false);
+        check_lock(&lock->lock, false);
         return;
     }
 
@@ -206,7 +206,7 @@ static inline int _write_trylock(rwlock_t *lock)
     u32 cnts;
 
     preempt_disable();
-    check_lock(&lock->lock.debug, true);
+    check_lock(&lock->lock, true);
     cnts = atomic_read(&lock->cnts);
     if ( unlikely(cnts) ||
          unlikely(atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) != 0) )
@@ -341,7 +341,7 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
     else
     {
         /* All other paths have implicit check_lock() calls via read_lock(). */
-        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
+        check_lock(&percpu_rwlock->rwlock.lock, false);
     }
 }
 
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 961891bea4..5b6b73732f 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -21,13 +21,11 @@ union lock_debug {
     };
 };
 #define _LOCK_DEBUG { LOCK_DEBUG_INITVAL }
-void check_lock(union lock_debug *debug, bool try);
 void spin_debug_enable(void);
 void spin_debug_disable(void);
 #else
 union lock_debug { };
 #define _LOCK_DEBUG { }
-#define check_lock(l, t) ((void)0)
 #define spin_debug_enable() ((void)0)
 #define spin_debug_disable() ((void)0)
 #endif
@@ -189,6 +187,14 @@ int _spin_trylock_recursive(spinlock_t *lock);
 void _spin_lock_recursive(spinlock_t *lock);
 void _spin_unlock_recursive(spinlock_t *lock);
 
+#ifdef CONFIG_DEBUG_LOCKS
+void check_lock(spinlock_t *lock, bool try);
+#else
+static inline void check_lock(spinlock_t *lock, bool try)
+{
+}
+#endif
+
 #define spin_lock(l)                  _spin_lock(l)
 #define spin_lock_cb(l, c, d)         _spin_lock_cb(l, c, d)
 #define spin_lock_irq(l)              _spin_lock_irq(l)
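[Editorial note: as background for the check_lock() calls touched above, the
function's purpose is to catch a lock that is taken both with and without
IRQs disabled, since an interrupt handler may then spin on a lock its own CPU
already holds. The sketch below is plain C, not Xen code; it models only the
latching consistency check, while the real implementation additionally uses
cmpxchg() for atomicity and tolerates trylocks from IRQ context.]

#include <stdbool.h>
#include <stdio.h>

struct lock_model {
    bool seen;       /* has the lock been acquired at least once?      */
    bool irq_safe;   /* latched "acquired with IRQs disabled" property */
};

/* Returns false if this acquisition is inconsistent with earlier ones. */
static bool model_check_lock(struct lock_model *l, bool irqs_disabled_now)
{
    if ( !l->seen )
    {
        /* First acquisition latches the lock's IRQ-safety classification. */
        l->seen = true;
        l->irq_safe = irqs_disabled_now;
        return true;
    }
    return l->irq_safe == irqs_disabled_now;
}

int main(void)
{
    struct lock_model l = { 0 };

    printf("first take, IRQs enabled:   %s\n",
           model_check_lock(&l, false) ? "ok" : "BUG");
    printf("second take, IRQs disabled: %s\n",
           model_check_lock(&l, true) ? "ok" : "BUG");   /* flagged */
    return 0;
}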
From patchwork Thu Feb 24 10:54:36 2022
X-Patchwork-Submitter: Jürgen Groß
X-Patchwork-Id: 12758330
From: Juergen Gross
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross, Jan Beulich, Andrew Cooper, George Dunlap, Roger Pau Monné,
 Wei Liu, Julien Grall, Stefano Stabellini
Subject: [PATCH 2/2] xen/spinlock: merge recurse_cpu and debug.cpu fields in struct spinlock
Date: Thu, 24 Feb 2022 11:54:36 +0100
Message-Id: <20220224105436.1480-3-jgross@suse.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20220224105436.1480-1-jgross@suse.com>
References: <20220224105436.1480-1-jgross@suse.com>

Instead of having two fields in struct spinlock holding a cpu number,
just merge them. For this purpose, get rid of union lock_debug and use a
32-bit sized union for cpu, recurse_cnt and the two debug booleans.
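[Editorial note: to illustrate why a single cpu field can serve both the
recursion logic and the debug checks, here is a minimal single-threaded
model in plain C. It is not the Xen implementation; names and widths are
illustrative only, and the real code spins or trylocks before claiming
ownership.]

#include <stdint.h>
#include <stdio.h>

#define MODEL_CPU_BITS  12
#define MODEL_NO_CPU    ((1u << MODEL_CPU_BITS) - 1)

struct lock_data_model {
    uint32_t cpu:MODEL_CPU_BITS;   /* owning CPU, MODEL_NO_CPU if unowned */
    uint32_t recurse_cnt:4;        /* recursion depth of the owner        */
};

static void lock_recursive(struct lock_data_model *d, unsigned int me)
{
    if ( d->cpu != me )
    {
        /* Real code would acquire the underlying lock before this point. */
        d->cpu = me;
    }
    d->recurse_cnt++;
}

static void unlock_recursive(struct lock_data_model *d)
{
    if ( --d->recurse_cnt == 0 )
        d->cpu = MODEL_NO_CPU;     /* last unlock releases ownership */
}

int main(void)
{
    struct lock_data_model d = { .cpu = MODEL_NO_CPU, .recurse_cnt = 0 };

    lock_recursive(&d, 3);         /* first acquisition by CPU 3  */
    lock_recursive(&d, 3);         /* nested acquisition, depth 2 */
    printf("owner=%u depth=%u\n", (unsigned)d.cpu, (unsigned)d.recurse_cnt);

    unlock_recursive(&d);
    unlock_recursive(&d);
    printf("owner=%u depth=%u\n", (unsigned)d.cpu, (unsigned)d.recurse_cnt);
    return 0;
}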
Signed-off-by: Juergen Gross
---
 xen/arch/x86/mm/mm-locks.h |  6 ++---
 xen/common/spinlock.c      | 48 +++++++++++++++++++++-----------------
 xen/include/xen/spinlock.h | 43 ++++++++++++++++++----------------
 3 files changed, 52 insertions(+), 45 deletions(-)

diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index fcfd4706ba..01cf3a820d 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -42,7 +42,7 @@ static inline void mm_lock_init(mm_lock_t *l)
 
 static inline bool mm_locked_by_me(const mm_lock_t *l)
 {
-    return (l->lock.recurse_cpu == current->processor);
+    return (l->lock.data.cpu == current->processor);
 }
 
 static inline int _get_lock_level(void)
@@ -94,7 +94,7 @@ static inline void _mm_lock(const struct domain *d, mm_lock_t *l,
     if ( !((mm_locked_by_me(l)) && rec) )
         _check_lock_level(d, level);
     spin_lock_recursive(&l->lock);
-    if ( l->lock.recurse_cnt == 1 )
+    if ( l->lock.data.recurse_cnt == 1 )
     {
         l->locker_function = func;
         l->unlock_level = _get_lock_level();
@@ -209,7 +209,7 @@ static inline void mm_read_unlock(mm_rwlock_t *l)
 
 static inline void mm_unlock(mm_lock_t *l)
 {
-    if ( l->lock.recurse_cnt == 1 )
+    if ( l->lock.data.recurse_cnt == 1 )
     {
         l->locker_function = "nobody";
         _set_lock_level(l->unlock_level);
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 53d6ab6853..33e6aaab1c 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -17,8 +17,6 @@ void check_lock(spinlock_t *lock, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
-    BUILD_BUG_ON(LOCK_DEBUG_PAD_BITS <= 0);
-
     if ( unlikely(atomic_read(&spin_debug) <= 0) )
         return;
 
@@ -49,12 +47,12 @@ void check_lock(spinlock_t *lock, bool try)
     if ( try && irq_safe )
         return;
 
-    if ( unlikely(lock->debug.irq_safe != irq_safe) )
+    if ( unlikely(lock->data.irq_safe != irq_safe) )
     {
-        union lock_debug seen, new = { 0 };
+        spinlock_data_t seen, new = { 0 };
 
         new.irq_safe = irq_safe;
-        seen.val = cmpxchg(&lock->debug.val, LOCK_DEBUG_INITVAL, new.val);
+        seen.val = cmpxchg(&lock->data.val, SPINLOCK_DATA_INITVAL, new.val);
 
         if ( !seen.unseen && seen.irq_safe == !irq_safe )
         {
@@ -81,19 +79,19 @@ static void check_barrier(spinlock_t *lock)
      * However, if we spin on an IRQ-unsafe lock with IRQs disabled then that
      * is clearly wrong, for the same reason outlined in check_lock() above.
      */
-    BUG_ON(!local_irq_is_enabled() && !lock->debug.irq_safe);
+    BUG_ON(!local_irq_is_enabled() && !lock->data.irq_safe);
 }
 
 static void got_lock(spinlock_t *lock)
 {
-    lock->debug.cpu = smp_processor_id();
+    lock->data.cpu = smp_processor_id();
 }
 
 static void rel_lock(spinlock_t *lock)
 {
     if ( atomic_read(&spin_debug) > 0 )
-        BUG_ON(lock->debug.cpu != smp_processor_id());
-    lock->debug.cpu = SPINLOCK_NO_CPU;
+        BUG_ON(lock->data.cpu != smp_processor_id());
+    lock->data.cpu = SPINLOCK_NO_CPU;
 }
 
 void spin_debug_enable(void)
@@ -230,9 +228,9 @@ int _spin_is_locked(spinlock_t *lock)
      * "false" here, making this function suitable only for use in
      * ASSERT()s and alike.
      */
-    return lock->recurse_cpu == SPINLOCK_NO_CPU
+    return lock->data.cpu == SPINLOCK_NO_CPU
            ? lock->tickets.head != lock->tickets.tail
-           : lock->recurse_cpu == smp_processor_id();
+           : lock->data.cpu == smp_processor_id();
 }
 
 int _spin_trylock(spinlock_t *lock)
@@ -296,22 +294,24 @@ int _spin_trylock_recursive(spinlock_t *lock)
 {
     unsigned int cpu = smp_processor_id();
 
-    /* Don't allow overflow of recurse_cpu field. */
+    /* Don't allow overflow of cpu field. */
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
     check_lock(lock, true);
 
-    if ( likely(lock->recurse_cpu != cpu) )
+    if ( likely(lock->data.cpu != cpu) )
     {
         if ( !spin_trylock(lock) )
             return 0;
-        lock->recurse_cpu = cpu;
+#ifndef CONFIG_DEBUG_LOCKS
+        lock->data.cpu = cpu;
+#endif
     }
 
     /* We support only fairly shallow recursion, else the counter overflows. */
-    ASSERT(lock->recurse_cnt < SPINLOCK_MAX_RECURSE);
-    lock->recurse_cnt++;
+    ASSERT(lock->data.recurse_cnt < SPINLOCK_MAX_RECURSE);
+    lock->data.recurse_cnt++;
 
     return 1;
 }
@@ -320,22 +320,26 @@ void _spin_lock_recursive(spinlock_t *lock)
 {
     unsigned int cpu = smp_processor_id();
 
-    if ( likely(lock->recurse_cpu != cpu) )
+    if ( likely(lock->data.cpu != cpu) )
     {
         _spin_lock(lock);
-        lock->recurse_cpu = cpu;
+#ifndef CONFIG_DEBUG_LOCKS
+        lock->data.cpu = cpu;
+#endif
     }
 
     /* We support only fairly shallow recursion, else the counter overflows. */
-    ASSERT(lock->recurse_cnt < SPINLOCK_MAX_RECURSE);
-    lock->recurse_cnt++;
+    ASSERT(lock->data.recurse_cnt < SPINLOCK_MAX_RECURSE);
+    lock->data.recurse_cnt++;
 }
 
 void _spin_unlock_recursive(spinlock_t *lock)
 {
-    if ( likely(--lock->recurse_cnt == 0) )
+    if ( likely(--lock->data.recurse_cnt == 0) )
     {
-        lock->recurse_cpu = SPINLOCK_NO_CPU;
+#ifndef CONFIG_DEBUG_LOCKS
+        lock->data.cpu = SPINLOCK_NO_CPU;
+#endif
         spin_unlock(lock);
     }
 }
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 5b6b73732f..61731b5d29 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -6,26 +6,34 @@
 #include
 #include
 
-#define SPINLOCK_CPU_BITS 12
+#define SPINLOCK_CPU_BITS     12
+#define SPINLOCK_NO_CPU       ((1u << SPINLOCK_CPU_BITS) - 1)
+#define SPINLOCK_RECURSE_BITS (16 - SPINLOCK_CPU_BITS)
+#define SPINLOCK_MAX_RECURSE  ((1u << SPINLOCK_RECURSE_BITS) - 1)
+#define SPINLOCK_PAD_BITS     (30 - SPINLOCK_CPU_BITS - SPINLOCK_RECURSE_BITS)
 
-#ifdef CONFIG_DEBUG_LOCKS
-union lock_debug {
-    uint16_t val;
-#define LOCK_DEBUG_INITVAL 0xffff
+typedef union {
+    u32 val;
     struct {
-        uint16_t cpu:SPINLOCK_CPU_BITS;
-#define LOCK_DEBUG_PAD_BITS (14 - SPINLOCK_CPU_BITS)
-        uint16_t :LOCK_DEBUG_PAD_BITS;
+        u32 cpu:SPINLOCK_CPU_BITS;
+        u32 recurse_cnt:SPINLOCK_RECURSE_BITS;
+        u32 pad:SPINLOCK_PAD_BITS;
+#ifdef CONFIG_DEBUG_LOCKS
         bool irq_safe:1;
         bool unseen:1;
+#define SPINLOCK_DEBUG_INITVAL 0xc0000000
+#else
+        u32 debug_pad:2;
+#define SPINLOCK_DEBUG_INITVAL 0x00000000
+#endif
     };
-};
-#define _LOCK_DEBUG { LOCK_DEBUG_INITVAL }
+} spinlock_data_t;
+#define SPINLOCK_DATA_INITVAL (SPINLOCK_NO_CPU | SPINLOCK_DEBUG_INITVAL)
+
+#ifdef CONFIG_DEBUG_LOCKS
 void spin_debug_enable(void);
 void spin_debug_disable(void);
 #else
-union lock_debug { };
-#define _LOCK_DEBUG { }
 #define spin_debug_enable() ((void)0)
 #define spin_debug_disable() ((void)0)
 #endif
@@ -92,7 +100,7 @@ struct lock_profile_qhead {
     static struct lock_profile * const __lock_profile_##name             \
     __used_section(".lockprofile.data") =                                 \
     &__lock_profile_data_##name
-#define _SPIN_LOCK_UNLOCKED(x) { { 0 }, SPINLOCK_NO_CPU, 0, _LOCK_DEBUG, x }
+#define _SPIN_LOCK_UNLOCKED(x) { { 0 }, { SPINLOCK_DATA_INITVAL }, x }
 #define SPIN_LOCK_UNLOCKED _SPIN_LOCK_UNLOCKED(NULL)
 #define DEFINE_SPINLOCK(l)                                                \
     spinlock_t l = _SPIN_LOCK_UNLOCKED(NULL);                             \
@@ -134,7 +142,7 @@ extern void cf_check spinlock_profile_reset(unsigned char key);
 
 struct lock_profile_qhead { };
 
-#define SPIN_LOCK_UNLOCKED { { 0 }, SPINLOCK_NO_CPU, 0, _LOCK_DEBUG }
+#define SPIN_LOCK_UNLOCKED { { 0 }, { SPINLOCK_DATA_INITVAL } }
 #define DEFINE_SPINLOCK(l) spinlock_t l = SPIN_LOCK_UNLOCKED
 
 #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))
@@ -156,12 +164,7 @@ typedef union {
 
 typedef struct spinlock {
     spinlock_tickets_t tickets;
-    u16 recurse_cpu:SPINLOCK_CPU_BITS;
-#define SPINLOCK_NO_CPU ((1u << SPINLOCK_CPU_BITS) - 1)
-#define SPINLOCK_RECURSE_BITS (16 - SPINLOCK_CPU_BITS)
-    u16 recurse_cnt:SPINLOCK_RECURSE_BITS;
-#define SPINLOCK_MAX_RECURSE ((1u << SPINLOCK_RECURSE_BITS) - 1)
-    union lock_debug debug;
+    spinlock_data_t data;
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
     struct lock_profile *profile;
 #endif
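[Editorial note: as a closing sanity check of the constants introduced above,
the following sketch, which is not part of the patch, verifies that the merged
word fits in 32 bits and shows the resulting unlocked initial value. It uses
plain shifts and masks on purpose, since bit-field ordering is
implementation-defined.]

#include <assert.h>
#include <stdio.h>

#define SPINLOCK_CPU_BITS      12
#define SPINLOCK_NO_CPU        ((1u << SPINLOCK_CPU_BITS) - 1)             /* 0xfff */
#define SPINLOCK_RECURSE_BITS  (16 - SPINLOCK_CPU_BITS)                    /* 4     */
#define SPINLOCK_MAX_RECURSE   ((1u << SPINLOCK_RECURSE_BITS) - 1)         /* 15    */
#define SPINLOCK_PAD_BITS      (30 - SPINLOCK_CPU_BITS - SPINLOCK_RECURSE_BITS) /* 14 */
#define SPINLOCK_DEBUG_INITVAL 0xc0000000   /* irq_safe + unseen bits in debug builds */
#define SPINLOCK_DATA_INITVAL  (SPINLOCK_NO_CPU | SPINLOCK_DEBUG_INITVAL)

int main(void)
{
    /* cpu + recurse_cnt + pad + the two debug bits must fill the 32-bit word. */
    static_assert(SPINLOCK_CPU_BITS + SPINLOCK_RECURSE_BITS +
                  SPINLOCK_PAD_BITS + 2 == 32, "layout must be 32 bits");

    /*
     * NR_CPUS up to 4095 is fine because valid CPU ids stay below the 4095
     * "no owner" sentinel; recursion depth is limited to 15.
     */
    printf("NO_CPU=%u MAX_RECURSE=%u INITVAL=%#x\n",
           SPINLOCK_NO_CPU, SPINLOCK_MAX_RECURSE,
           (unsigned)SPINLOCK_DATA_INITVAL);
    return 0;
}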