From patchwork Fri Jan 22 13:41:47 2016
X-Patchwork-Submitter: Malcolm Crossley
X-Patchwork-Id: 8090231
From: Malcolm Crossley
Date: Fri, 22 Jan 2016 13:41:47 +0000
Message-ID: <1453470107-27861-4-git-send-email-malcolm.crossley@citrix.com>
In-Reply-To: <1453470107-27861-1-git-send-email-malcolm.crossley@citrix.com>
References: <1453470107-27861-1-git-send-email-malcolm.crossley@citrix.com>
Cc: xen-devel@lists.xenproject.org, dario.faggioli@citrix.com,
    stefano.stabellini@citrix.com, Malcolm Crossley
Subject: [Xen-devel] [PATCHv6 3/3] p2m: convert p2m rwlock to percpu rwlock
List-Id: Xen developer discussion

The per-domain p2m read lock suffers from significant contention when
performing multi-queue block or network IO, due to the parallel grant
maps/unmaps/copies occurring on the DomU's p2m.

On multi-socket systems, the contention results in the locked
compare-and-swap operation failing frequently, which leads to a tight
loop of retries of the compare-and-swap operation. As the coherency
fabric can only support a limited rate of compare-and-swap operations
on a particular data location, taking the read lock itself becomes a
bottleneck for p2m operations.

Percpu rwlock p2m performance with the same configuration is
approximately 64 Gbit/s, versus 48 Gbit/s with grant table percpu
rwlocks only.
Oprofile was used to determine the initial overhead of the read-write
locks and to confirm the overhead was dramatically reduced by the
percpu rwlocks.

Note: altp2m users will not achieve a gain if they take an altp2m read
lock simultaneously with the main p2m lock.

Signed-off-by: Malcolm Crossley
Reviewed-by: George Dunlap
---
Changes since v5:
- None

Changes since v4:
- None

Changes since v3:
- None

Changes since v2:
- Updated local percpu rwlock initialisation
---
 xen/arch/x86/mm/mm-locks.h | 12 +++++++-----
 xen/arch/x86/mm/p2m.c      |  1 +
 xen/include/asm-x86/mm.h   |  2 +-
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 76c7217..8a40986 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -31,6 +31,8 @@ DECLARE_PER_CPU(int, mm_lock_level);
 
 #define __get_lock_level() (this_cpu(mm_lock_level))
 
+DECLARE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
+
 static inline void mm_lock_init(mm_lock_t *l)
 {
     spin_lock_init(&l->lock);
@@ -99,7 +101,7 @@ static inline void _mm_enforce_order_lock_post(int level, int *unlock_level,
 
 static inline void mm_rwlock_init(mm_rwlock_t *l)
 {
-    rwlock_init(&l->lock);
+    percpu_rwlock_resource_init(&l->lock, p2m_percpu_rwlock);
     l->locker = -1;
     l->locker_function = "nobody";
     l->unlock_level = 0;
@@ -115,7 +117,7 @@ static inline void _mm_write_lock(mm_rwlock_t *l, const char *func, int level)
     if ( !mm_write_locked_by_me(l) )
     {
         __check_lock_level(level);
-        write_lock(&l->lock);
+        percpu_write_lock(p2m_percpu_rwlock, &l->lock);
         l->locker = get_processor_id();
         l->locker_function = func;
         l->unlock_level = __get_lock_level();
@@ -131,20 +133,20 @@ static inline void mm_write_unlock(mm_rwlock_t *l)
     l->locker = -1;
     l->locker_function = "nobody";
     __set_lock_level(l->unlock_level);
-    write_unlock(&l->lock);
+    percpu_write_unlock(p2m_percpu_rwlock, &l->lock);
 }
 
 static inline void _mm_read_lock(mm_rwlock_t *l, int level)
 {
     __check_lock_level(level);
-    read_lock(&l->lock);
+    percpu_read_lock(p2m_percpu_rwlock, &l->lock);
     /* There's nowhere to store the per-CPU unlock level so we can't
      * set the lock level. */
 }
 
 static inline void mm_read_unlock(mm_rwlock_t *l)
 {
-    read_unlock(&l->lock);
+    percpu_read_unlock(p2m_percpu_rwlock, &l->lock);
 }
 
 /* This wrapper uses the line number to express the locking order below */
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index ed0bbd7..a45ee35 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -54,6 +54,7 @@ boolean_param("hap_2mb", opt_hap_2mb);
 #undef page_to_mfn
 #define page_to_mfn(_pg) _mfn(__page_to_mfn(_pg))
 
+DEFINE_PERCPU_RWLOCK_GLOBAL(p2m_percpu_rwlock);
 
 /* Init the datastructures for later use by the p2m code */
 static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index de3f973..7598414 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -584,7 +584,7 @@ typedef struct mm_lock {
 } mm_lock_t;
 
 typedef struct mm_rwlock {
-    rwlock_t lock;
+    percpu_rwlock_t lock;
     int unlock_level;
     int recurse_count;
     int locker; /* CPU that holds the write lock */