From patchwork Wed Sep 5 08:31:49 2012
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 1406561
From: Sasha Levin <levinsasha928@gmail.com>
To: penberg@kernel.org
Cc: asias.hejun@gmail.com, mingo@elte.hu, gorcunov@openvz.org,
	kvm@vger.kernel.org, Sasha Levin <levinsasha928@gmail.com>
Subject: [PATCH 15/33] kvm tools: threadpool exit routine
Date: Wed, 5 Sep 2012 10:31:49 +0200
Message-Id: <1346833927-15740-16-git-send-email-levinsasha928@gmail.com>
X-Mailer: git-send-email 1.7.12
In-Reply-To: <1346833927-15740-1-git-send-email-levinsasha928@gmail.com>
References: <1346833927-15740-1-git-send-email-levinsasha928@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

Add an exit function for the threadpool which will stop all running
threads in the pool. Also clean up the init code a bit.
Signed-off-by: Sasha Levin <levinsasha928@gmail.com>
---
 tools/kvm/builtin-run.c            | 11 ++++++++++-
 tools/kvm/include/kvm/threadpool.h |  3 ++-
 tools/kvm/util/threadpool.c        | 33 +++++++++++++++++++++++++++++----
 3 files changed, 41 insertions(+), 6 deletions(-)

diff --git a/tools/kvm/builtin-run.c b/tools/kvm/builtin-run.c
index b4da06e..d83917f 100644
--- a/tools/kvm/builtin-run.c
+++ b/tools/kvm/builtin-run.c
@@ -1216,7 +1216,12 @@ static int kvm_cmd_run_init(int argc, const char **argv)
 		}
 	}
 
-	thread_pool__init(nr_online_cpus);
+	r = thread_pool__init(kvm);
+	if (r < 0) {
+		pr_err("thread_pool__init() failed with error %d\n", r);
+		goto fail;
+	}
+
 fail:
 	return r;
 }
@@ -1301,6 +1306,10 @@ static void kvm_cmd_run_exit(int guest_ret)
 	if (r < 0)
 		pr_warning("pci__exit() failed with error %d\n", r);
 
+	r = thread_pool__exit(kvm);
+	if (r < 0)
+		pr_warning("thread_pool__exit() failed with error %d\n", r);
+
 	r = kvm__exit(kvm);
 	if (r < 0)
 		pr_warning("pci__exit() failed with error %d\n", r);
diff --git a/tools/kvm/include/kvm/threadpool.h b/tools/kvm/include/kvm/threadpool.h
index 768239f..abe46ea 100644
--- a/tools/kvm/include/kvm/threadpool.h
+++ b/tools/kvm/include/kvm/threadpool.h
@@ -30,7 +30,8 @@ static inline void thread_pool__init_job(struct thread_pool__job *job, struct kv
 	};
 }
 
-int thread_pool__init(unsigned long thread_count);
+int thread_pool__init(struct kvm *kvm);
+int thread_pool__exit(struct kvm *kvm);
 
 void thread_pool__do_job(struct thread_pool__job *job);
 
diff --git a/tools/kvm/util/threadpool.c b/tools/kvm/util/threadpool.c
index bafbcd7..6c7566d 100644
--- a/tools/kvm/util/threadpool.c
+++ b/tools/kvm/util/threadpool.c
@@ -14,6 +14,7 @@
 static LIST_HEAD(head);
 static pthread_t *threads;
 static long threadcount;
+static bool running;
 
 static struct thread_pool__job *thread_pool__job_pop_locked(void)
 {
@@ -76,15 +77,16 @@ static void *thread_pool__threadfunc(void *param)
 {
 	pthread_cleanup_push(thread_pool__threadfunc_cleanup, NULL);
 
-	for (;;) {
+	while (running) {
 		struct thread_pool__job *curjob;
 
 		mutex_lock(&job_mutex);
-		while ((curjob = thread_pool__job_pop_locked()) == NULL)
+		while (running && (curjob = thread_pool__job_pop_locked()) == NULL)
 			pthread_cond_wait(&job_cond, &job_mutex);
 		mutex_unlock(&job_mutex);
 
-		thread_pool__handle_job(curjob);
+		if (running)
+			thread_pool__handle_job(curjob);
 	}
 
 	pthread_cleanup_pop(0);
@@ -116,9 +118,12 @@ static int thread_pool__addthread(void)
 	return res;
 }
 
-int thread_pool__init(unsigned long thread_count)
+int thread_pool__init(struct kvm *kvm)
 {
 	unsigned long i;
+	unsigned int thread_count = sysconf(_SC_NPROCESSORS_ONLN);
+
+	running = true;
 
 	for (i = 0; i < thread_count; i++)
 		if (thread_pool__addthread() < 0)
@@ -127,6 +132,26 @@ int thread_pool__init(struct kvm *kvm)
 	return i;
 }
 
+int thread_pool__exit(struct kvm *kvm)
+{
+	int i;
+	void *NUL = NULL;
+
+	running = false;
+
+	for (i = 0; i < threadcount; i++) {
+		mutex_lock(&job_mutex);
+		pthread_cond_signal(&job_cond);
+		mutex_unlock(&job_mutex);
+	}
+
+	for (i = 0; i < threadcount; i++) {
+		pthread_join(threads[i], NUL);
+	}
+
+	return 0;
+}
+
 void thread_pool__do_job(struct thread_pool__job *job)
 {
 	struct thread_pool__job *jobinfo = job;
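
For reference, the shutdown handshake this patch implements (stop flag,
condition-variable wakeup, pthread_join) can be exercised in isolation.
The following is a minimal standalone sketch, not code from the kvm tools
tree; the names pool_init, pool_exit, worker and NR_THREADS are
illustrative only. It differs from the patch in two hedged ways: it flips
the running flag while holding the mutex, and it uses one
pthread_cond_broadcast() instead of the per-thread pthread_cond_signal()
loop above — both approaches wake every waiter, since each worker
re-checks the flag before waiting again.

/* Standalone sketch of the cooperative thread-pool shutdown pattern. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_THREADS 4

static pthread_mutex_t job_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t job_cond = PTHREAD_COND_INITIALIZER;
static pthread_t threads[NR_THREADS];
static bool running;

static void *worker(void *param)
{
	pthread_mutex_lock(&job_mutex);
	while (running)
		/* Real code would pop and run a queued job here; the
		 * sketch only demonstrates the blocking/wakeup path. */
		pthread_cond_wait(&job_cond, &job_mutex);
	pthread_mutex_unlock(&job_mutex);
	return NULL;
}

static int pool_init(void)
{
	int i;

	running = true;
	for (i = 0; i < NR_THREADS; i++)
		if (pthread_create(&threads[i], NULL, worker, NULL))
			return -1;
	return 0;
}

static int pool_exit(void)
{
	int i;

	/* Clear the flag under the mutex so no worker can test it between
	 * the store and its pthread_cond_wait(), then wake everyone. */
	pthread_mutex_lock(&job_mutex);
	running = false;
	pthread_cond_broadcast(&job_cond);
	pthread_mutex_unlock(&job_mutex);

	for (i = 0; i < NR_THREADS; i++)
		pthread_join(threads[i], NULL);
	return 0;
}

int main(void)
{
	if (pool_init())
		return 1;
	pool_exit();
	printf("all workers joined\n");
	return 0;
}

One design note: thread_pool__exit() in the patch clears running before
taking job_mutex, relying on the subsequent signal loop for visibility;
performing the store under the mutex, as in the sketch, is the
textbook-safe ordering because it rules out a worker re-entering
pthread_cond_wait() after the last signal has already been sent.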