From patchwork Fri Aug 27 05:49:11 2010
X-Patchwork-Submitter: Jason Wang
X-Patchwork-Id: 136631
Subject: [PATCH kvm-unit-test 1/6] Introduce memory barriers.
To: mtosatti@redhat.com, avi@redhat.com, kvm@vger.kernel.org
From: Jason Wang
Cc: glommer@redhat.com
Date: Fri, 27 Aug 2010 13:49:11 +0800
Message-ID: <20100827054911.7409.1538.stgit@FreeLancer>
In-Reply-To: <20100827054733.7409.63882.stgit@FreeLancer>
References: <20100827054733.7409.63882.stgit@FreeLancer>
User-Agent: StGit/0.15
X-Mailing-List: kvm@vger.kernel.org

diff --git a/lib/x86/smp.h b/lib/x86/smp.h
index c2e7350..df5fdba 100644
--- a/lib/x86/smp.h
+++ b/lib/x86/smp.h
@@ -1,6 +1,10 @@
 #ifndef __SMP_H
 #define __SMP_H
 
+#define mb() asm volatile("mfence":::"memory")
+#define rmb() asm volatile("lfence":::"memory")
+#define wmb() asm volatile("sfence" ::: "memory")
+
 struct spinlock {
     int v;
 };
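
For readers new to these macros: mb() orders all loads and stores (mfence),
rmb() orders loads against loads (lfence), and wmb() orders stores against
stores (sfence); the "memory" clobber additionally makes each one a compiler
barrier. Below is a minimal usage sketch, not part of the patch, showing the
classic publish/consume pattern such barriers exist for. The producer/consumer
functions, the data/ready variables, and the "smp.h" include path are
hypothetical names chosen for illustration.

    /* Usage sketch only; assumes the mb()/rmb()/wmb() macros above. */
    #include "smp.h"

    static int data;
    static volatile int ready;

    /* Runs on one vCPU. */
    static void producer(void)
    {
            data = 42;      /* payload store */
            wmb();          /* order payload store before the flag store */
            ready = 1;      /* publish */
    }

    /* Runs on another vCPU. */
    static void consumer(void)
    {
            while (!ready)
                    ;       /* spin until the flag is published */
            rmb();          /* order flag load before the payload load */
            /* data now reliably reads 42 */
    }

Note that x86's hardware memory model already keeps ordinary stores ordered,
so lfence/sfence matter mainly around weakly ordered operations (e.g.
non-temporal stores); in a test harness the macros chiefly make the intended
ordering explicit and keep the compiler from reordering across them.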