From patchwork Fri Apr 22 10:54:12 2016
X-Patchwork-Submitter: Quan Xu
X-Patchwork-Id: 8909691
From: Quan Xu <quan.xu@intel.com>
To: xen-devel@lists.xen.org
Cc: Quan Xu <quan.xu@intel.com>, kevin.tian@intel.com, feng.wu@intel.com,
 dario.faggioli@citrix.com, jbeulich@suse.com
Date: Fri, 22 Apr 2016 18:54:12 +0800
Message-Id: <1461322453-29216-3-git-send-email-quan.xu@intel.com>
In-Reply-To: <1461322453-29216-1-git-send-email-quan.xu@intel.com>
References: <1461322453-29216-1-git-send-email-quan.xu@intel.com>
Subject: [Xen-devel] [PATCH v10 2/3] vt-d: synchronize for Device-TLB flush one by one

Today we perform Device-TLB flush synchronization only after issuing flush
requests for all ATS devices belonging to a VM. This imposes a limitation:
per the VT-d spec, we cannot tell which flush request in the invalidation
queue is the one that is blocked.

To prepare for correct Device-TLB flush timeout handling in the next patch,
change the behavior to synchronize after every Device-TLB flush request. The
Device-TLB flush interface therefore changes slightly: the timeout is now
checked inside the function rather than by its callers.

Accordingly, make a similar change to the IOTLB/IEC/Context flush interfaces,
i.e. move the synchronization into the functions. Since no users of the
non-synced interfaces remain, simply rename the existing ones with a _sync
suffix.

Signed-off-by: Quan Xu <quan.xu@intel.com>
---
 xen/drivers/passthrough/vtd/extern.h  |  5 +--
 xen/drivers/passthrough/vtd/qinval.c  | 61 +++++++++++++++++++++--------------
 xen/drivers/passthrough/vtd/x86/ats.c |  8 ++---
 3 files changed, 43 insertions(+), 31 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index d4d37c3..ab7ecad 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -59,8 +59,9 @@ int ats_device(const struct pci_dev *, const struct acpi_drhd_unit *);
 
 int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
                          u64 addr, unsigned int size_order, u64 type);
-int qinval_device_iotlb(struct iommu *iommu,
-                        u32 max_invs_pend, u16 sid, u16 size, u64 addr);
+int __must_check qinval_device_iotlb_sync(struct iommu *iommu,
+                                          u32 max_invs_pend,
+                                          u16 sid, u16 size, u64 addr);
 
 unsigned int get_cache_line_size(void);
 void cacheline_flush(char *);
diff --git a/xen/drivers/passthrough/vtd/qinval.c b/xen/drivers/passthrough/vtd/qinval.c
index 52ba2c2..69cc6bf 100644
--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -33,6 +33,8 @@ integer_param("vtd_qi_timeout", vtd_qi_timeout);
 
 #define IOMMU_QI_TIMEOUT (vtd_qi_timeout * MILLISECS(1))
 
+static int invalidate_sync(struct iommu *iommu);
+
 static void print_qi_regs(struct iommu *iommu)
 {
     u64 val;
@@ -72,8 +74,10 @@ static void qinval_update_qtail(struct iommu *iommu, unsigned int index)
     dmar_writeq(iommu->reg, DMAR_IQT_REG, (val << QINVAL_INDEX_SHIFT));
 }
 
-static void queue_invalidate_context(struct iommu *iommu,
-    u16 did, u16 source_id, u8 function_mask, u8 granu)
+static int __must_check queue_invalidate_context_sync(struct iommu *iommu,
+                                                      u16 did, u16 source_id,
+                                                      u8 function_mask,
+                                                      u8 granu)
 {
     unsigned long flags;
     unsigned int index;
@@ -100,10 +104,14 @@ static void queue_invalidate_context(struct iommu *iommu,
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 
     unmap_vtd_domain_page(qinval_entries);
+
+    return invalidate_sync(iommu);
 }
 
-static void queue_invalidate_iotlb(struct iommu *iommu,
-    u8 granu, u8 dr, u8 dw, u16 did, u8 am, u8 ih, u64 addr)
+static int __must_check queue_invalidate_iotlb_sync(struct iommu *iommu,
+                                                    u8 granu, u8 dr, u8 dw,
+                                                    u16 did, u8 am, u8 ih,
+                                                    u64 addr)
 {
     unsigned long flags;
     unsigned int index;
@@ -133,10 +141,12 @@ static void queue_invalidate_iotlb(struct iommu *iommu,
     unmap_vtd_domain_page(qinval_entries);
     qinval_update_qtail(iommu, index);
     spin_unlock_irqrestore(&iommu->register_lock, flags);
+
+    return invalidate_sync(iommu);
 }
 
 static int __must_check queue_invalidate_wait(struct iommu *iommu,
-    u8 iflag, u8 sw, u8 fn)
+                                              u8 iflag, u8 sw, u8 fn)
 {
     s_time_t timeout;
     volatile u32 poll_slot = QINVAL_STAT_INIT;
@@ -196,8 +206,10 @@ static int invalidate_sync(struct iommu *iommu)
     return 0;
 }
 
-int qinval_device_iotlb(struct iommu *iommu,
-    u32 max_invs_pend, u16 sid, u16 size, u64 addr)
+int __must_check qinval_device_iotlb_sync(struct iommu *iommu,
+                                          u32 max_invs_pend,
+                                          u16 sid, u16 size,
+                                          u64 addr)
 {
     unsigned long flags;
     unsigned int index;
@@ -226,15 +238,17 @@ int qinval_device_iotlb(struct iommu *iommu,
     qinval_update_qtail(iommu, index);
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 
-    return 0;
+    return invalidate_sync(iommu);
 }
 
-static void queue_invalidate_iec(struct iommu *iommu, u8 granu, u8 im, u16 iidx)
+static int __must_check queue_invalidate_iec_sync(struct iommu *iommu,
+                                                  u8 granu, u8 im, u16 iidx)
 {
     unsigned long flags;
     unsigned int index;
     u64 entry_base;
     struct qinval_entry *qinval_entry, *qinval_entries;
+    int ret;
 
     spin_lock_irqsave(&iommu->register_lock, flags);
     index = qinval_next_index(iommu);
@@ -254,14 +268,9 @@ static void queue_invalidate_iec(struct iommu *iommu, u8 granu, u8 im, u16 iidx)
     unmap_vtd_domain_page(qinval_entries);
     qinval_update_qtail(iommu, index);
     spin_unlock_irqrestore(&iommu->register_lock, flags);
-}
 
-static int __iommu_flush_iec(struct iommu *iommu, u8 granu, u8 im, u16 iidx)
-{
-    int ret;
-
-    queue_invalidate_iec(iommu, granu, im, iidx);
     ret = invalidate_sync(iommu);
+
     /*
      * reading vt-d architecture register will ensure
      * draining happens in implementation independent way.
@@ -273,12 +282,12 @@ static int __iommu_flush_iec(struct iommu *iommu, u8 granu, u8 im, u16 iidx)
 
 int iommu_flush_iec_global(struct iommu *iommu)
 {
-    return __iommu_flush_iec(iommu, IEC_GLOBAL_INVL, 0, 0);
+    return queue_invalidate_iec_sync(iommu, IEC_GLOBAL_INVL, 0, 0);
 }
 
 int iommu_flush_iec_index(struct iommu *iommu, u8 im, u16 iidx)
 {
-    return __iommu_flush_iec(iommu, IEC_INDEX_INVL, im, iidx);
+    return queue_invalidate_iec_sync(iommu, IEC_INDEX_INVL, im, iidx);
 }
 
 static int flush_context_qi(
@@ -304,11 +313,9 @@ static int flush_context_qi(
     }
 
     if ( qi_ctrl->qinval_maddr != 0 )
-    {
-        queue_invalidate_context(iommu, did, sid, fm,
-                                 type >> DMA_CCMD_INVL_GRANU_OFFSET);
-        ret = invalidate_sync(iommu);
-    }
+        ret = queue_invalidate_context_sync(iommu, did, sid, fm,
+                                            type >> DMA_CCMD_INVL_GRANU_OFFSET);
+
     return ret;
 }
 
@@ -346,9 +353,13 @@ static int flush_iotlb_qi(
         if (cap_read_drain(iommu->cap))
             dr = 1;
         /* Need to conside the ih bit later */
-        queue_invalidate_iotlb(iommu,
-                               type >> DMA_TLB_FLUSH_GRANU_OFFSET, dr,
-                               dw, did, size_order, 0, addr);
+        ret = queue_invalidate_iotlb_sync(iommu,
+                                          type >> DMA_TLB_FLUSH_GRANU_OFFSET,
+                                          dr, dw, did, size_order, 0, addr);
+
+        if ( ret )
+            return ret;
+
         if ( flush_dev_iotlb )
             ret = dev_invalidate_iotlb(iommu, did, addr, size_order, type);
         rc = invalidate_sync(iommu);
diff --git a/xen/drivers/passthrough/vtd/x86/ats.c b/xen/drivers/passthrough/vtd/x86/ats.c
index 334b9c1..dfa4d30 100644
--- a/xen/drivers/passthrough/vtd/x86/ats.c
+++ b/xen/drivers/passthrough/vtd/x86/ats.c
@@ -134,8 +134,8 @@ int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
         /* invalidate all translations: sbit=1,bit_63=0,bit[62:12]=1 */
         sbit = 1;
         addr = (~0UL << PAGE_SHIFT_4K) & 0x7FFFFFFFFFFFFFFF;
-        rc = qinval_device_iotlb(iommu, pdev->ats_queue_depth,
-                                 sid, sbit, addr);
+        rc = qinval_device_iotlb_sync(iommu, pdev->ats_queue_depth,
+                                      sid, sbit, addr);
         break;
     case DMA_TLB_PSI_FLUSH:
         if ( !device_in_domain(iommu, pdev, did) )
@@ -154,8 +154,8 @@ int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
             addr |= (((u64)1 << (size_order - 1)) - 1) << PAGE_SHIFT_4K;
         }
 
-        rc = qinval_device_iotlb(iommu, pdev->ats_queue_depth,
-                                 sid, sbit, addr);
+        rc = qinval_device_iotlb_sync(iommu, pdev->ats_queue_depth,
+                                      sid, sbit, addr);
         break;
     default:
         dprintk(XENLOG_WARNING VTDPREFIX, "invalid vt-d flush type\n");