From patchwork Wed Aug 16 18:33:11 2017
X-Patchwork-Submitter: Boris Ostrovsky
X-Patchwork-Id: 9904483
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xen.org
Cc: sstabellini@kernel.org, wei.liu2@citrix.com, George.Dunlap@eu.citrix.com,
    andrew.cooper3@citrix.com, ian.jackson@eu.citrix.com, tim@xen.org,
    julien.grall@arm.com, jbeulich@suse.com, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date: Wed, 16 Aug 2017 14:33:11 -0400
Message-Id: <1502908394-9760-6-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1502908394-9760-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1502908394-9760-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.3.1
Subject: [Xen-devel] [PATCHES v8 5/8] spinlock: Introduce spin_lock_cb()

While waiting for a lock we may want to periodically run some code. That
code may, for example, allow the caller to release resources it holds that
are no longer needed in the critical section protected by the lock.

Specifically, this feature will be needed by the scrubbing code: the
scrubber, while waiting for the heap lock in order to merge back clean
pages, may be asked by the page allocator (which currently holds the lock)
to abort the merge and release the buddy page head that the allocator
wants.

We could use spin_trylock(), but since it doesn't take a lock ticket it may
take a long time before the lock is actually acquired. Instead we add
spin_lock_cb(), which grabs a ticket and executes a callback while waiting.
The callback is invoked on every iteration of the spinlock waiting loop.

Since the waiter may be sleeping in the lock until it is released, we also
need a mechanism to make sure the callback gets a chance to run. We add
spin_lock_kick(), which wakes up the waiter.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes in v8:
* Defined arch_lock_signal_wmb() to avoid issuing smp_wmb() twice on ARM.
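For illustration only (not part of this patch): a minimal sketch of the
waiter side. The structure, flag and function names below
(scrub_wait_state, abort_req, scrub_abort_cb, scrub_merge_chunk) are
hypothetical stand-ins for what the scrubbing patches later in this series
do; only spin_lock_cb(), spin_unlock() and the allocator's heap_lock
correspond to real interfaces.

/*
 * Illustrative sketch only; names other than spin_lock_cb(), spin_unlock()
 * and heap_lock are hypothetical.
 */
struct scrub_wait_state {
    struct page_info *buddy;  /* buddy head the allocator may want back */
    bool dropped;             /* set once we have released it */
};

static bool abort_req;        /* hypothetical flag, written by the lock holder */

/* Runs on every iteration of the ticket-wait loop in _spin_lock_cb(). */
static void scrub_abort_cb(void *data)
{
    struct scrub_wait_state *st = data;

    if ( read_atomic(&abort_req) && !st->dropped )
    {
        /* ... release st->buddy so the allocator can take it ... */
        st->dropped = true;
    }
}

static void scrub_merge_chunk(struct page_info *buddy)
{
    struct scrub_wait_state st = { .buddy = buddy, .dropped = false };

    /* Same semantics as spin_lock(&heap_lock), plus the polling callback. */
    spin_lock_cb(&heap_lock, scrub_abort_cb, &st);

    /* ... merge the buddy back into the heap, unless st.dropped ... */

    spin_unlock(&heap_lock);
}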
 xen/common/spinlock.c          | 9 ++++++++-
 xen/include/asm-arm/spinlock.h | 2 ++
 xen/include/asm-x86/spinlock.h | 5 +++++
 xen/include/xen/spinlock.h     | 4 ++++
 4 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 2a06406..3c1caae 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -129,7 +129,7 @@ static always_inline u16 observe_head(spinlock_tickets_t *t)
     return read_atomic(&t->head);
 }
 
-void _spin_lock(spinlock_t *lock)
+void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
 {
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
     LOCK_PROFILE_VAR;
@@ -140,6 +140,8 @@ void _spin_lock(spinlock_t *lock)
     while ( tickets.tail != observe_head(&lock->tickets) )
     {
         LOCK_PROFILE_BLOCK;
+        if ( unlikely(cb) )
+            cb(data);
         arch_lock_relax();
     }
     LOCK_PROFILE_GOT;
@@ -147,6 +149,11 @@ void _spin_lock(spinlock_t *lock)
     arch_lock_acquire_barrier();
 }
 
+void _spin_lock(spinlock_t *lock)
+{
+    _spin_lock_cb(lock, NULL, NULL);
+}
+
 void _spin_lock_irq(spinlock_t *lock)
 {
     ASSERT(local_irq_is_enabled());
diff --git a/xen/include/asm-arm/spinlock.h b/xen/include/asm-arm/spinlock.h
index 8cdf9e1..42b0f58 100644
--- a/xen/include/asm-arm/spinlock.h
+++ b/xen/include/asm-arm/spinlock.h
@@ -10,4 +10,6 @@
         sev();              \
     } while(0)
 
+#define arch_lock_signal_wmb()    arch_lock_signal()
+
 #endif /* __ASM_SPINLOCK_H */
diff --git a/xen/include/asm-x86/spinlock.h b/xen/include/asm-x86/spinlock.h
index be72c0f..56f6095 100644
--- a/xen/include/asm-x86/spinlock.h
+++ b/xen/include/asm-x86/spinlock.h
@@ -18,5 +18,10 @@
 
 #define arch_lock_relax() cpu_relax()
 #define arch_lock_signal()
+#define arch_lock_signal_wmb()    \
+({                                \
+    smp_wmb();                    \
+    arch_lock_signal();           \
+})
 
 #endif /* __ASM_SPINLOCK_H */
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index c1883bd..b5ca07d 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -153,6 +153,7 @@ typedef struct spinlock {
 #define spin_lock_init(l) (*(l) = (spinlock_t)SPIN_LOCK_UNLOCKED)
 
 void _spin_lock(spinlock_t *lock);
+void _spin_lock_cb(spinlock_t *lock, void (*cond)(void *), void *data);
 void _spin_lock_irq(spinlock_t *lock);
 unsigned long _spin_lock_irqsave(spinlock_t *lock);
 
@@ -169,6 +170,7 @@ void _spin_lock_recursive(spinlock_t *lock);
 void _spin_unlock_recursive(spinlock_t *lock);
 
 #define spin_lock(l)                  _spin_lock(l)
+#define spin_lock_cb(l, c, d)         _spin_lock_cb(l, c, d)
 #define spin_lock_irq(l)              _spin_lock_irq(l)
 #define spin_lock_irqsave(l, f)                                 \
     ({                                                          \
@@ -190,6 +192,8 @@ void _spin_unlock_recursive(spinlock_t *lock);
         1 : ({ local_irq_restore(flags); 0; });                 \
     })
 
+#define spin_lock_kick(l)             arch_lock_signal_wmb()
+
 /* Ensure a lock is quiescent between two critical operations. */
 #define spin_barrier(l)               _spin_barrier(l)
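For completeness, the holder side pairs with the waiter sketch above
roughly as follows; this too is illustrative only and reuses the same
hypothetical abort_req flag. spin_lock_kick() expands to
arch_lock_signal_wmb(): on x86 that is an explicit smp_wmb() followed by
the (empty) arch_lock_signal(), while on ARM it maps directly onto
arch_lock_signal(), which already orders the store before sev() (hence the
v8 change noted above).

/*
 * Illustrative sketch only: the allocator currently holds heap_lock and
 * asks the spinning scrubber to drop the buddy head it needs.
 */
static void kick_scrubber(void)
{
    ASSERT(spin_is_locked(&heap_lock));

    /* Publish the request for the waiter's callback to see. */
    write_atomic(&abort_req, true);

    /* Make the store visible to the waiter and wake it up. */
    spin_lock_kick(&heap_lock);
}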