From patchwork Mon May 30 17:27:55 2011
X-Patchwork-Submitter: Sasha Levin
X-Patchwork-Id: 830672
From: Sasha Levin
To: penberg@kernel.org
Cc: kvm@vger.kernel.org, mingo@elte.hu, asias.hejun@gmail.com,
	gorcunov@gmail.com, prasadjoshi124@gmail.com, Sasha Levin
Subject: [PATCH v3 5/8] kvm tools: Add a brlock
Date: Mon, 30 May 2011 20:27:55 +0300
Message-Id: <1306776478-29613-6-git-send-email-levinsasha928@gmail.com>
In-Reply-To: <1306776478-29613-1-git-send-email-levinsasha928@gmail.com>
References: <1306776478-29613-1-git-send-email-levinsasha928@gmail.com>
X-Mailing-List: kvm@vger.kernel.org

brlock is a lock which is very cheap for reads, but very expensive for
writes. This lock will be used when updates are very rare and reads are
common.

This lock is currently implemented by stopping the guest while performing
the updates. We assume that the only threads which read from the locked
data are VCPU threads, and that the only writer is not a VCPU thread.
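As a rough illustration of the intended usage (not part of this patch; the
device structure and helper names below are made up for the example), a VCPU
thread would wrap its lookups in the br_read_lock()/br_read_unlock() macros
introduced by the patch below, while the rare update path, which never runs
in a VCPU thread, takes br_write_lock()/br_write_unlock():

#include "kvm/brlock.h"

struct example_dev {
	struct example_dev	*next;
	unsigned short		port;
};

static struct example_dev *devs;	/* shared data read by VCPU threads */

/* Read side: runs in a VCPU thread on every exit, must stay cheap. */
static struct example_dev *example_dev__find(unsigned short port)
{
	struct example_dev *dev;

	br_read_lock();			/* just a compiler barrier */
	for (dev = devs; dev; dev = dev->next)
		if (dev->port == port)
			break;
	br_read_unlock();

	return dev;
}

/* Write side: rare, and never called from a VCPU thread. */
static void example_dev__add(struct example_dev *dev)
{
	br_write_lock();		/* kvm__pause(): all VCPUs stop */
	dev->next	= devs;
	devs		= dev;
	br_write_unlock();		/* kvm__continue(): VCPUs resume */
}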
Signed-off-by: Sasha Levin
---
 tools/kvm/include/kvm/brlock.h |   25 +++++++++++++++++++++++++
 1 files changed, 25 insertions(+), 0 deletions(-)
 create mode 100644 tools/kvm/include/kvm/brlock.h

diff --git a/tools/kvm/include/kvm/brlock.h b/tools/kvm/include/kvm/brlock.h
new file mode 100644
index 0000000..2e2e0f8
--- /dev/null
+++ b/tools/kvm/include/kvm/brlock.h
@@ -0,0 +1,25 @@
+#ifndef KVM__BRLOCK_H
+#define KVM__BRLOCK_H
+
+#include "kvm/kvm.h"
+#include "kvm/barrier.h"
+
+/*
+ * brlock is a lock which is very cheap for reads, but very expensive
+ * for writes.
+ * This lock will be used when updates are very rare and reads are common.
+ * This lock is currently implemented by stopping the guest while
+ * performing the updates. We assume that the only threads which read from
+ * the locked data are VCPU threads, and the only writer isn't a VCPU thread.
+ */
+
+#ifndef barrier
+#define barrier()		__asm__ __volatile__("": : :"memory")
+#endif
+
+#define br_read_lock()		barrier()
+#define br_read_unlock()	barrier()
+
+#define br_write_lock()		kvm__pause()
+#define br_write_unlock()	kvm__continue()
+#endif
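Why a plain compiler barrier is enough on the read side: since br_write_lock()
pauses every VCPU thread before the update starts, no reader can run
concurrently with the writer, so the read path only needs barrier() to keep
the compiler from caching the protected data across the critical section.
For contrast, a conventional implementation of the same interface (shown only
to illustrate the cost this design avoids, not part of this patch) would take
an rwlock on every access:

#include <pthread.h>

static pthread_rwlock_t brlock_rwlock = PTHREAD_RWLOCK_INITIALIZER;

/* Every reader now pays for an atomic operation in rdlock()... */
#define br_read_lock()		pthread_rwlock_rdlock(&brlock_rwlock)
#define br_read_unlock()	pthread_rwlock_unlock(&brlock_rwlock)

/* ...but the writer no longer has to stop the whole guest. */
#define br_write_lock()		pthread_rwlock_wrlock(&brlock_rwlock)
#define br_write_unlock()	pthread_rwlock_unlock(&brlock_rwlock)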