| Message ID | 20240314072029.16937-9-jgross@suse.com (mailing list archive) |
|---|---|
| State | Superseded |
| Series | xen/spinlock: make recursive spinlocks a dedicated type |
On 14.03.2024 08:20, Juergen Gross wrote:
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -395,14 +395,7 @@ static bool always_inline spin_is_locked_common(const spinlock_tickets_t *t)
>  
>  int _spin_is_locked(const spinlock_t *lock)
>  {
> -    /*
> -     * Recursive locks may be locked by another CPU, yet we return
> -     * "false" here, making this function suitable only for use in
> -     * ASSERT()s and alike.
> -     */
> -    return lock->recurse_cpu == SPINLOCK_NO_CPU
> -           ? spin_is_locked_common(&lock->tickets)
> -           : lock->recurse_cpu == smp_processor_id();
> +    return spin_is_locked_common(&lock->tickets);
>  }

The "only suitable for ASSERT()s and alike" part of the comment wants
to survive here, I think.

> @@ -465,6 +458,23 @@ void _spin_barrier(spinlock_t *lock)
>      spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
>  }
>  
> +bool _rspin_is_locked(const rspinlock_t *lock)
> +{
> +    /*
> +     * Recursive locks may be locked by another CPU, yet we return
> +     * "false" here, making this function suitable only for use in
> +     * ASSERT()s and alike.
> +     */
> +    return lock->recurse_cpu == SPINLOCK_NO_CPU
> +           ? spin_is_locked_common(&lock->tickets)
> +           : lock->recurse_cpu == smp_processor_id();
> +}

Here otoh I wonder if both the comment and the spin_is_locked_common()
part of the condition are actually correct. Oh, the latter needs
retaining as long as we have nrspin_*() functions, I suppose. But the
comment could surely do with improving a little - at the very least
"yet we return "false"" isn't quite right; minimally there's a "may"
missing.

In principle, without any nrspin_*() functions, the result here ought
to be usable generally, not just for ASSERT()s. Whether having its and
_spin_is_locked()'s behavior differ would be a good idea is a separate
question, of course.

Jan
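To illustrate the nrspin_*() interaction referred to above, a minimal
sketch (the function is hypothetical and assumes the nrspin_lock() /
nrspin_unlock() wrappers provided elsewhere in the series):

static void nrspin_example(rspinlock_t *lock)
{
    /* Taken non-recursively: recurse_cpu stays SPINLOCK_NO_CPU. */
    nrspin_lock(lock);

    /* Only the spin_is_locked_common() ticket check can see this. */
    ASSERT(rspin_is_locked(lock));

    nrspin_unlock(lock);
}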
On 18.03.24 15:57, Jan Beulich wrote:
> On 14.03.2024 08:20, Juergen Gross wrote:
>> --- a/xen/common/spinlock.c
>> +++ b/xen/common/spinlock.c
>> @@ -395,14 +395,7 @@ static bool always_inline spin_is_locked_common(const spinlock_tickets_t *t)
>>  
>>  int _spin_is_locked(const spinlock_t *lock)
>>  {
>> -    /*
>> -     * Recursive locks may be locked by another CPU, yet we return
>> -     * "false" here, making this function suitable only for use in
>> -     * ASSERT()s and alike.
>> -     */
>> -    return lock->recurse_cpu == SPINLOCK_NO_CPU
>> -           ? spin_is_locked_common(&lock->tickets)
>> -           : lock->recurse_cpu == smp_processor_id();
>> +    return spin_is_locked_common(&lock->tickets);
>>  }
>
> The "only suitable for ASSERT()s and alike" part of the comment wants
> to survive here, I think.

Why?

I could understand you asking for putting such a comment to spinlock.h
mentioning that any *_is_locked() variant isn't safe, but with
_spin_is_locked() no longer covering recursive locks the comment's
reasoning is no longer true.

>> @@ -465,6 +458,23 @@ void _spin_barrier(spinlock_t *lock)
>>      spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
>>  }
>>  
>> +bool _rspin_is_locked(const rspinlock_t *lock)
>> +{
>> +    /*
>> +     * Recursive locks may be locked by another CPU, yet we return
>> +     * "false" here, making this function suitable only for use in
>> +     * ASSERT()s and alike.
>> +     */
>> +    return lock->recurse_cpu == SPINLOCK_NO_CPU
>> +           ? spin_is_locked_common(&lock->tickets)
>> +           : lock->recurse_cpu == smp_processor_id();
>> +}
>
> Here otoh I wonder if both the comment and the spin_is_locked_common()
> part of the condition are actually correct. Oh, the latter needs
> retaining as long as we have nrspin_*() functions, I suppose. But the
> comment could surely do with improving a little - at the very least
> "yet we return "false"" isn't quite right; minimally there's a "may"
> missing.

If anything I guess the comment shouldn't gain a "may", but rather say
"Recursive locks may be locked by another CPU via rspin_lock() ..."


Juergen
On 18.03.2024 16:31, Jürgen Groß wrote:
> On 18.03.24 15:57, Jan Beulich wrote:
>> On 14.03.2024 08:20, Juergen Gross wrote:
>>> --- a/xen/common/spinlock.c
>>> +++ b/xen/common/spinlock.c
>>> @@ -395,14 +395,7 @@ static bool always_inline spin_is_locked_common(const spinlock_tickets_t *t)
>>>  
>>>  int _spin_is_locked(const spinlock_t *lock)
>>>  {
>>> -    /*
>>> -     * Recursive locks may be locked by another CPU, yet we return
>>> -     * "false" here, making this function suitable only for use in
>>> -     * ASSERT()s and alike.
>>> -     */
>>> -    return lock->recurse_cpu == SPINLOCK_NO_CPU
>>> -           ? spin_is_locked_common(&lock->tickets)
>>> -           : lock->recurse_cpu == smp_processor_id();
>>> +    return spin_is_locked_common(&lock->tickets);
>>>  }
>>
>> The "only suitable for ASSERT()s and alike" part of the comment wants
>> to survive here, I think.
>
> Why?
>
> I could understand you asking for putting such a comment to spinlock.h
> mentioning that any *_is_locked() variant isn't safe, but with
> _spin_is_locked() no longer covering recursive locks the comment's
> reasoning is no longer true.

Hmm. I guess there is a difference in expectations. To me, these
functions in principle ought to report whether the lock is "owned",
not just "locked by some CPU". They don't, hence why they may not be
used for other than ASSERT()s.

As to the reasoning no longer being applicable here: That's why I asked
to only retain the "only ASSERT()s" part of the comment. Yes, such a
comment may also be suitable to have in spinlock.h. What I'd like to
avoid is for it to be lost altogether.

Jan

>>> @@ -465,6 +458,23 @@ void _spin_barrier(spinlock_t *lock)
>>>      spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
>>>  }
>>>  
>>> +bool _rspin_is_locked(const rspinlock_t *lock)
>>> +{
>>> +    /*
>>> +     * Recursive locks may be locked by another CPU, yet we return
>>> +     * "false" here, making this function suitable only for use in
>>> +     * ASSERT()s and alike.
>>> +     */
>>> +    return lock->recurse_cpu == SPINLOCK_NO_CPU
>>> +           ? spin_is_locked_common(&lock->tickets)
>>> +           : lock->recurse_cpu == smp_processor_id();
>>> +}
>>
>> Here otoh I wonder if both the comment and the spin_is_locked_common()
>> part of the condition are actually correct. Oh, the latter needs
>> retaining as long as we have nrspin_*() functions, I suppose. But the
>> comment could surely do with improving a little - at the very least
>> "yet we return "false"" isn't quite right; minimally there's a "may"
>> missing.
>
> If anything I guess the comment shouldn't gain a "may", but rather say
> "Recursive locks may be locked by another CPU via rspin_lock() ..."
>
>
> Juergen
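To make the "owned" vs. "locked by some CPU" distinction concrete, a
minimal sketch (the function is hypothetical, not part of the patch):

static void is_locked_example(spinlock_t *lock)
{
    if ( !spin_is_locked(lock) )
    {
        /*
         * Unsafe: the sampled state may change right after the check,
         * and the check says nothing about which CPU holds the lock.
         */
    }

    /* Fine: asserting an invariant the caller itself guarantees. */
    ASSERT(spin_is_locked(lock));
}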
On 18.03.24 16:44, Jan Beulich wrote:
> On 18.03.2024 16:31, Jürgen Groß wrote:
>> On 18.03.24 15:57, Jan Beulich wrote:
>>> On 14.03.2024 08:20, Juergen Gross wrote:
>>>> --- a/xen/common/spinlock.c
>>>> +++ b/xen/common/spinlock.c
>>>> @@ -395,14 +395,7 @@ static bool always_inline spin_is_locked_common(const spinlock_tickets_t *t)
>>>>  
>>>>  int _spin_is_locked(const spinlock_t *lock)
>>>>  {
>>>> -    /*
>>>> -     * Recursive locks may be locked by another CPU, yet we return
>>>> -     * "false" here, making this function suitable only for use in
>>>> -     * ASSERT()s and alike.
>>>> -     */
>>>> -    return lock->recurse_cpu == SPINLOCK_NO_CPU
>>>> -           ? spin_is_locked_common(&lock->tickets)
>>>> -           : lock->recurse_cpu == smp_processor_id();
>>>> +    return spin_is_locked_common(&lock->tickets);
>>>>  }
>>>
>>> The "only suitable for ASSERT()s and alike" part of the comment wants
>>> to survive here, I think.
>>
>> Why?
>>
>> I could understand you asking for putting such a comment to spinlock.h
>> mentioning that any *_is_locked() variant isn't safe, but with
>> _spin_is_locked() no longer covering recursive locks the comment's
>> reasoning is no longer true.
>
> Hmm. I guess there is a difference in expectations. To me, these
> functions in principle ought to report whether the lock is "owned",
> not just "locked by some CPU". They don't, hence why they may not be
> used for other than ASSERT()s.

Okay, thanks for clarification. I'll change the comment accordingly.


Juergen
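One possible shape of the reworded comment, along the lines discussed
above (a sketch only; the final wording is up to the patch author):

/*
 * The lock is "locked" if the local CPU owns it via rspin_lock(), or
 * if any CPU holds it via nrspin_lock().  A lock held recursively by
 * another CPU yields "false", making this function suitable only for
 * use in ASSERT()s and alike.
 */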
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index 515a8c98a5..ae5dcb1870 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -374,7 +374,7 @@ int p2m_pod_empty_cache(struct domain *d)
 
     /* After this barrier no new PoD activities can happen. */
     BUG_ON(!d->is_dying);
-    spin_barrier(&p2m->pod.lock.lock);
+    rspin_barrier(&p2m->pod.lock.lock);
 
     lock_page_alloc(p2m);
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 8e0109c590..2d7c69e6d2 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -983,7 +983,7 @@ int domain_kill(struct domain *d)
     case DOMDYING_alive:
         domain_pause(d);
         d->is_dying = DOMDYING_dying;
-        spin_barrier(&d->domain_lock);
+        rspin_barrier(&d->domain_lock);
         argo_destroy(d);
         vnuma_destroy(d->vnuma);
         domain_set_outstanding_pages(d, 0);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 20238015ed..83d1715e25 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -475,7 +475,7 @@ unsigned long domain_adjust_tot_pages(struct domain *d, long pages)
 {
     long dom_before, dom_after, dom_claimed, sys_before, sys_after;
 
-    ASSERT(spin_is_locked(&d->page_alloc_lock));
+    ASSERT(rspin_is_locked(&d->page_alloc_lock));
     d->tot_pages += pages;
 
     /*
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 648393d95f..c3f2f9b209 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -395,14 +395,7 @@ static bool always_inline spin_is_locked_common(const spinlock_tickets_t *t)
 
 int _spin_is_locked(const spinlock_t *lock)
 {
-    /*
-     * Recursive locks may be locked by another CPU, yet we return
-     * "false" here, making this function suitable only for use in
-     * ASSERT()s and alike.
-     */
-    return lock->recurse_cpu == SPINLOCK_NO_CPU
-           ? spin_is_locked_common(&lock->tickets)
-           : lock->recurse_cpu == smp_processor_id();
+    return spin_is_locked_common(&lock->tickets);
 }
 
 static bool always_inline spin_trylock_common(spinlock_tickets_t *t,
@@ -465,6 +458,23 @@ void _spin_barrier(spinlock_t *lock)
     spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
 }
 
+bool _rspin_is_locked(const rspinlock_t *lock)
+{
+    /*
+     * Recursive locks may be locked by another CPU, yet we return
+     * "false" here, making this function suitable only for use in
+     * ASSERT()s and alike.
+     */
+    return lock->recurse_cpu == SPINLOCK_NO_CPU
+           ? spin_is_locked_common(&lock->tickets)
+           : lock->recurse_cpu == smp_processor_id();
+}
+
+void _rspin_barrier(rspinlock_t *lock)
+{
+    spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
+}
+
 bool _rspin_trylock(rspinlock_t *lock)
 {
     unsigned int cpu = smp_processor_id();
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 22f50fc617..d5e6aacc27 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -327,7 +327,7 @@ static void cf_check do_dec_thresh(unsigned char key, bool unused)
 
 static void conring_puts(const char *str, size_t len)
 {
-    ASSERT(spin_is_locked(&console_lock));
+    ASSERT(rspin_is_locked(&console_lock));
 
     while ( len-- )
         conring[CONRING_IDX_MASK(conringp++)] = *str++;
@@ -765,7 +765,7 @@ static void __putstr(const char *str)
 {
     size_t len = strlen(str);
 
-    ASSERT(spin_is_locked(&console_lock));
+    ASSERT(rspin_is_locked(&console_lock));
 
     console_serial_puts(str, len);
     video_puts(str, len);
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 4fcc7e2cde..5a446d3dce 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -65,7 +65,7 @@ void pcidevs_unlock(void)
 
 bool pcidevs_locked(void)
 {
-    return !!spin_is_locked(&_pcidevs_lock);
+    return rspin_is_locked(&_pcidevs_lock);
 }
 
 static struct radix_tree_root pci_segments;
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index 5b20b11db6..7dd11faab3 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -297,6 +297,8 @@ void _rspin_lock(rspinlock_t *lock);
 unsigned long _rspin_lock_irqsave(rspinlock_t *lock);
 void _rspin_unlock(rspinlock_t *lock);
 void _rspin_unlock_irqrestore(rspinlock_t *lock, unsigned long flags);
+bool _rspin_is_locked(const rspinlock_t *lock);
+void _rspin_barrier(rspinlock_t *lock);
 
 static always_inline void rspin_lock(rspinlock_t *lock)
 {
@@ -307,6 +309,8 @@ static always_inline void rspin_lock(rspinlock_t *lock)
 #define rspin_trylock(l)              lock_evaluate_nospec(_rspin_trylock(l))
 #define rspin_unlock(l)               _rspin_unlock(l)
 #define rspin_unlock_irqrestore(l, f) _rspin_unlock_irqrestore(l, f)
+#define rspin_barrier(l)              _rspin_barrier(l)
+#define rspin_is_locked(l)            _rspin_is_locked(l)
 
 #define nrspin_trylock(l)    spin_trylock(l)
 #define nrspin_lock(l)       spin_lock(l)
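For illustration, a hypothetical caller combining the two new helpers,
mirroring the domain_kill() and domain_adjust_tot_pages() conversions
above (a sketch only, not part of the patch):

static void teardown_example(struct domain *d)
{
    /* Ownership-style check, only valid inside an assertion. */
    ASSERT(rspin_is_locked(&d->page_alloc_lock));

    /* Wait until no CPU holds the recursive lock anymore. */
    rspin_barrier(&d->domain_lock);
}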
Add rspin_is_locked() and rspin_barrier() in order to prepare differing
spinlock_t and rspinlock_t types.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- partially carved out from V1 patch, partially new
V5:
- let rspin_is_locked() return bool (Jan Beulich)
---
 xen/arch/x86/mm/p2m-pod.c     |  2 +-
 xen/common/domain.c           |  2 +-
 xen/common/page_alloc.c       |  2 +-
 xen/common/spinlock.c         | 26 ++++++++++++++++++--------
 xen/drivers/char/console.c    |  4 ++--
 xen/drivers/passthrough/pci.c |  2 +-
 xen/include/xen/spinlock.h    |  4 ++++
 7 files changed, 28 insertions(+), 14 deletions(-)