From patchwork Mon Feb 28 09:10:07 2011
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Paolo Bonzini <pbonzini@redhat.com>
X-Patchwork-Id: 594671
From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: kvm@vger.kernel.org, aurelien@aurel32.net, blauwirbel@gmail.com,
	jan.kiszka@siemens.com, mtosatti@redhat.com
Subject: [PATCH v3 uq/master 05/22] add win32 qemu-thread implementation
Date: Mon, 28 Feb 2011 10:10:07 +0100
Message-Id: <1298884224-19734-6-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.7.4
In-Reply-To: <1298884224-19734-1-git-send-email-pbonzini@redhat.com>
References: <1298884224-19734-1-git-send-email-pbonzini@redhat.com>
X-Mailing-List: kvm@vger.kernel.org

diff --git a/Makefile.objs b/Makefile.objs
index 9e98a66..a52f42f 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -142,8 +142,8 @@ endif
 common-obj-y += $(addprefix ui/, $(ui-obj-y))
 
 common-obj-y += iov.o acl.o
-common-obj-$(CONFIG_THREAD) += qemu-thread.o
-common-obj-$(CONFIG_POSIX) += compatfd.o
+common-obj-$(CONFIG_POSIX) += qemu-thread-posix.o compatfd.o
+common-obj-$(CONFIG_WIN32) += qemu-thread-win32.o
 common-obj-y += notify.o event_notifier.o
 common-obj-y += qemu-timer.o qemu-timer-common.o
diff --git a/qemu-thread.c b/qemu-thread-posix.c
similarity index 100%
rename from qemu-thread.c
rename to qemu-thread-posix.c
diff --git a/qemu-thread-posix.h b/qemu-thread-posix.h
new file mode 100644
index 0000000..7af371c
--- /dev/null
+++ b/qemu-thread-posix.h
@@ -0,0 +1,18 @@
+#ifndef __QEMU_THREAD_POSIX_H
+#define __QEMU_THREAD_POSIX_H 1
+#include "pthread.h"
+
+struct QemuMutex {
+    pthread_mutex_t lock;
+};
+
+struct QemuCond {
+    pthread_cond_t cond;
+};
+
+struct QemuThread {
+    pthread_t thread;
+};
+
+void qemu_thread_signal(QemuThread *thread, int sig);
+#endif
diff --git a/qemu-thread-win32.c b/qemu-thread-win32.c
new file mode 100644
index 0000000..2edcb1a
--- /dev/null
+++ b/qemu-thread-win32.c
@@ -0,0 +1,260 @@
+/*
+ * Win32 implementation for mutex/cond/thread functions
+ *
+ * Copyright Red Hat, Inc. 2010
+ *
+ * Author:
+ *  Paolo Bonzini <pbonzini@redhat.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ *
+ */
+#include "qemu-common.h"
+#include "qemu-thread.h"
+#include <process.h>
+#include <assert.h>
+#include <limits.h>
+
+static void error_exit(int err, const char *msg)
+{
+    char *pstr;
+
+    FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_ALLOCATE_BUFFER,
+                  NULL, err, 0, (LPTSTR)&pstr, 2, NULL);
+    fprintf(stderr, "qemu: %s: %s\n", msg, pstr);
+    LocalFree(pstr);
+    exit(1);
+}
+
+void qemu_mutex_init(QemuMutex *mutex)
+{
+    mutex->owner = 0;
+    InitializeCriticalSection(&mutex->lock);
+}
+
+void qemu_mutex_lock(QemuMutex *mutex)
+{
+    EnterCriticalSection(&mutex->lock);
+
+    /* Win32 CRITICAL_SECTIONs are recursive.  Assert that we're not
+     * using them as such.
+     */
+    assert(mutex->owner == 0);
+    mutex->owner = GetCurrentThreadId();
+}
+
+int qemu_mutex_trylock(QemuMutex *mutex)
+{
+    int owned;
+
+    owned = TryEnterCriticalSection(&mutex->lock);
+    if (owned) {
+        assert(mutex->owner == 0);
+        mutex->owner = GetCurrentThreadId();
+    }
+    return !owned;
+}
+
+void qemu_mutex_unlock(QemuMutex *mutex)
+{
+    assert(mutex->owner == GetCurrentThreadId());
+    mutex->owner = 0;
+    LeaveCriticalSection(&mutex->lock);
+}
+
+void qemu_cond_init(QemuCond *cond)
+{
+    memset(cond, 0, sizeof(*cond));
+
+    cond->sema = CreateSemaphore(NULL, 0, LONG_MAX, NULL);
+    if (!cond->sema) {
+        error_exit(GetLastError(), __func__);
+    }
+    cond->continue_event = CreateEvent(NULL,    /* security */
+                                       FALSE,   /* auto-reset */
+                                       FALSE,   /* not signaled */
+                                       NULL);   /* name */
+    if (!cond->continue_event) {
+        error_exit(GetLastError(), __func__);
+    }
+}
+
+void qemu_cond_signal(QemuCond *cond)
+{
+    DWORD result;
+
+    /*
+     * Signal only when there are waiters.  cond->waiters is
+     * incremented by pthread_cond_wait under the external lock,
+     * so we are safe about that.
+     */
+    if (cond->waiters == 0) {
+        return;
+    }
+
+    /*
+     * Waiting threads decrement it outside the external lock, but
+     * only if another thread is executing pthread_cond_broadcast and
+     * has the mutex.  So, it also cannot be decremented concurrently
+     * with this particular access.
+     */
+    cond->target = cond->waiters - 1;
+    result = SignalObjectAndWait(cond->sema, cond->continue_event,
+                                 INFINITE, FALSE);
+    if (result == WAIT_ABANDONED || result == WAIT_FAILED) {
+        error_exit(GetLastError(), __func__);
+    }
+}
+
+void qemu_cond_broadcast(QemuCond *cond)
+{
+    BOOLEAN result;
+    /*
+     * As in pthread_cond_signal, access to cond->waiters and
+     * cond->target is locked via the external mutex.
+     */
+    if (cond->waiters == 0) {
+        return;
+    }
+
+    cond->target = 0;
+    result = ReleaseSemaphore(cond->sema, cond->waiters, NULL);
+    if (!result) {
+        error_exit(GetLastError(), __func__);
+    }
+
+    /*
+     * At this point all waiters continue.  Each one takes its
+     * slice of the semaphore.  Now it's our turn to wait: Since
+     * the external mutex is held, no thread can leave cond_wait,
+     * yet.  For this reason, we can be sure that no thread gets
+     * a chance to eat *more* than one slice.  OTOH, it means
+     * that the last waiter must send us a wake-up.
+     */
+    WaitForSingleObject(cond->continue_event, INFINITE);
+}
+
+void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex)
+{
+    /*
+     * This access is protected under the mutex.
+     */
+    cond->waiters++;
+
+    /*
+     * Unlock external mutex and wait for signal.
+     * NOTE: we've held mutex locked long enough to increment
+     * waiters count above, so there's no problem with
+     * leaving mutex unlocked before we wait on semaphore.
+     */
+    qemu_mutex_unlock(mutex);
+    WaitForSingleObject(cond->sema, INFINITE);
+
+    /* Now waiters must rendez-vous with the signaling thread and
+     * let it continue.  For cond_broadcast this has heavy contention
+     * and triggers thundering herd.  So goes life.
+     *
+     * Decrease waiters count.  The mutex is not taken, so we have
+     * to do this atomically.
+     *
+     * All waiters contend for the mutex at the end of this function
+     * until the signaling thread relinquishes it.  To ensure
+     * each waiter consumes exactly one slice of the semaphore,
+     * the signaling thread stops until it is told by the last
+     * waiter that it can go on.
+     */
+    if (InterlockedDecrement(&cond->waiters) == cond->target) {
+        SetEvent(cond->continue_event);
+    }
+
+    qemu_mutex_lock(mutex);
+}
+
+struct QemuThreadData {
+    QemuThread *thread;
+    void *(*start_routine)(void *);
+    void *arg;
+};
+
+static int qemu_thread_tls_index = TLS_OUT_OF_INDEXES;
+
+static unsigned __stdcall win32_start_routine(void *arg)
+{
+    struct QemuThreadData data = *(struct QemuThreadData *) arg;
+    QemuThread *thread = data.thread;
+
+    free(arg);
+    TlsSetValue(qemu_thread_tls_index, thread);
+
+    /*
+     * Use DuplicateHandle instead of assigning thread->thread in the
+     * creating thread to avoid races.  It's simpler this way than with
+     * synchronization.
+     */
+    DuplicateHandle(GetCurrentProcess(), GetCurrentThread(),
+                    GetCurrentProcess(), &thread->thread,
+                    0, FALSE, DUPLICATE_SAME_ACCESS);
+
+    qemu_thread_exit(data.start_routine(data.arg));
+    abort();
+}
+
+void qemu_thread_exit(void *arg)
+{
+    QemuThread *thread = TlsGetValue(qemu_thread_tls_index);
+    thread->ret = arg;
+    CloseHandle(thread->thread);
+    thread->thread = NULL;
+    ExitThread(0);
+}
+
+static inline void qemu_thread_init(void)
+{
+    if (qemu_thread_tls_index == TLS_OUT_OF_INDEXES) {
+        qemu_thread_tls_index = TlsAlloc();
+        if (qemu_thread_tls_index == TLS_OUT_OF_INDEXES) {
+            error_exit(ERROR_NO_SYSTEM_RESOURCES, __func__);
+        }
+    }
+}
+
+
+void qemu_thread_create(QemuThread *thread,
+                       void *(*start_routine)(void *),
+                       void *arg)
+{
+    HANDLE hThread;
+
+    struct QemuThreadData *data;
+    qemu_thread_init();
+    data = qemu_malloc(sizeof *data);
+    data->thread = thread;
+    data->start_routine = start_routine;
+    data->arg = arg;
+
+    hThread = (HANDLE) _beginthreadex(NULL, 0, win32_start_routine,
+                                      data, 0, NULL);
+    if (!hThread) {
+        error_exit(GetLastError(), __func__);
+    }
+    CloseHandle(hThread);
+}
+
+void qemu_thread_get_self(QemuThread *thread)
+{
+    if (!thread->thread) {
+        /* In the main thread of the process.  Initialize the QemuThread
+           pointer in TLS, and use the dummy GetCurrentThread handle as
+           the identifier for qemu_thread_is_self.
+         */
+        qemu_thread_init();
+        TlsSetValue(qemu_thread_tls_index, thread);
+        thread->thread = GetCurrentThread();
+    }
+}
+
+int qemu_thread_is_self(QemuThread *thread)
+{
+    QemuThread *this_thread = TlsGetValue(qemu_thread_tls_index);
+    return this_thread->thread == thread->thread;
+}
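
[Illustrative usage sketch, not part of the patch: a minimal one-slot mailbox
built on the primitives above.  It shows the usage pattern the Win32 emulation
relies on, namely that qemu_cond_signal()/qemu_cond_broadcast() are called with
the same mutex held as the waiters use (see the comment added to qemu-thread.h
below).  The Mailbox type and mailbox_* helper names are invented for this
sketch.]

/* Hypothetical example only; not from the QEMU source tree. */
#include "qemu-thread.h"

typedef struct Mailbox {
    QemuMutex lock;
    QemuCond cond;
    void *item;                     /* NULL means "empty" */
} Mailbox;

static void mailbox_init(Mailbox *mb)
{
    qemu_mutex_init(&mb->lock);
    qemu_cond_init(&mb->cond);
    mb->item = NULL;
}

static void mailbox_put(Mailbox *mb, void *item)
{
    qemu_mutex_lock(&mb->lock);
    mb->item = item;
    qemu_cond_signal(&mb->cond);    /* mutex still held, as required */
    qemu_mutex_unlock(&mb->lock);
}

static void *mailbox_get(Mailbox *mb)
{
    void *item;

    qemu_mutex_lock(&mb->lock);
    while (!mb->item) {             /* loop: another consumer may have taken it */
        qemu_cond_wait(&mb->cond, &mb->lock);
    }
    item = mb->item;
    mb->item = NULL;
    qemu_mutex_unlock(&mb->lock);
    return item;
}
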
diff --git a/qemu-thread-win32.h b/qemu-thread-win32.h
new file mode 100644
index 0000000..878f86a
--- /dev/null
+++ b/qemu-thread-win32.h
@@ -0,0 +1,21 @@
+#ifndef __QEMU_THREAD_WIN32_H
+#define __QEMU_THREAD_WIN32_H 1
+#include "windows.h"
+
+struct QemuMutex {
+    CRITICAL_SECTION lock;
+    LONG owner;
+};
+
+struct QemuCond {
+    LONG waiters, target;
+    HANDLE sema;
+    HANDLE continue_event;
+};
+
+struct QemuThread {
+    HANDLE thread;
+    void *ret;
+};
+
+#endif
diff --git a/qemu-thread.h b/qemu-thread.h
index add97bf..acdb6b2 100644
--- a/qemu-thread.h
+++ b/qemu-thread.h
@@ -1,24 +1,16 @@
 #ifndef __QEMU_THREAD_H
 #define __QEMU_THREAD_H 1
-#include "semaphore.h"
-#include "pthread.h"
-
-struct QemuMutex {
-    pthread_mutex_t lock;
-};
-
-struct QemuCond {
-    pthread_cond_t cond;
-};
-
-struct QemuThread {
-    pthread_t thread;
-};
 
 typedef struct QemuMutex QemuMutex;
 typedef struct QemuCond QemuCond;
 typedef struct QemuThread QemuThread;
 
+#ifdef _WIN32
+#include "qemu-thread-win32.h"
+#else
+#include "qemu-thread-posix.h"
+#endif
+
 void qemu_mutex_init(QemuMutex *mutex);
 void qemu_mutex_destroy(QemuMutex *mutex);
 void qemu_mutex_lock(QemuMutex *mutex);
@@ -28,6 +20,12 @@ void qemu_mutex_unlock(QemuMutex *mutex);
 
 void qemu_cond_init(QemuCond *cond);
 void qemu_cond_destroy(QemuCond *cond);
+
+/*
+ * IMPORTANT: The implementation does not guarantee that pthread_cond_signal
+ * and pthread_cond_broadcast can be called except while the same mutex is
+ * held as in the corresponding pthread_cond_wait calls!
+ */
 void qemu_cond_signal(QemuCond *cond);
 void qemu_cond_broadcast(QemuCond *cond);
 void qemu_cond_wait(QemuCond *cond, QemuMutex *mutex);
@@ -36,7 +34,6 @@ int qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex, uint64_t msecs);
 void qemu_thread_create(QemuThread *thread,
                        void *(*start_routine)(void*),
                        void *arg);
-void qemu_thread_signal(QemuThread *thread, int sig);
 void qemu_thread_get_self(QemuThread *thread);
 int qemu_thread_is_self(QemuThread *thread);
 void qemu_thread_exit(void *retval);
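
[Illustrative usage sketch, not part of the patch: thread creation and identity
with the API declared above.  The worker function and global variables are
invented for the example.  There is no qemu_thread_join() at this point in the
series, so the program does not wait for the worker; output order is therefore
not deterministic.  On Win32, calling qemu_thread_get_self() in the main thread
first also allocates the TLS slot that qemu_thread_is_self() and
qemu_thread_exit() rely on.]

/* Hypothetical example only; not from the QEMU source tree. */
#include <stdio.h>
#include "qemu-thread.h"

static QemuThread main_thread;      /* invented names, for the sketch only */
static QemuThread worker_thread;

static void *worker_fn(void *arg)
{
    (void)arg;
    /* In the worker, TLS points at worker_thread, so this prints 0. */
    printf("worker: is main thread? %d\n", qemu_thread_is_self(&main_thread));
    return NULL;
}

int main(void)
{
    /* Record the current (main) thread first; on Win32 this also sets up
     * the TLS index used by the identity and exit functions. */
    qemu_thread_get_self(&main_thread);

    qemu_thread_create(&worker_thread, worker_fn, NULL);

    /* In the main thread this prints 1. */
    printf("main:   is main thread? %d\n", qemu_thread_is_self(&main_thread));
    return 0;
}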