From patchwork Thu Mar 24 05:57:57 2016
X-Patchwork-Submitter: Quan Xu
X-Patchwork-Id: 8658291
From: Quan Xu <quan.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Thu, 24 Mar 2016 13:57:57 +0800
Message-Id: <1458799079-79825-2-git-send-email-quan.xu@intel.com>
In-Reply-To: <1458799079-79825-1-git-send-email-quan.xu@intel.com>
References: <1458799079-79825-1-git-send-email-quan.xu@intel.com>
Cc: Quan Xu <quan.xu@intel.com>, kevin.tian@intel.com, feng.wu@intel.com,
    dario.faggioli@citrix.com, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v8 2/3] VT-d: Wrap a _sync version for all VT-d flush interfaces

For consistency, wrap a _sync version of each VT-d flush interface.
This simplifies caller logic and makes the code more readable.
Signed-off-by: Quan Xu <quan.xu@intel.com>
---
 xen/drivers/passthrough/vtd/extern.h  |   2 +
 xen/drivers/passthrough/vtd/qinval.c  | 173 ++++++++++++++++++++--------------
 xen/drivers/passthrough/vtd/x86/ats.c |  12 +--
 3 files changed, 106 insertions(+), 81 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index d4d37c3..6d3187d 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -61,6 +61,8 @@ int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
 int qinval_device_iotlb(struct iommu *iommu, u32 max_invs_pend,
                         u16 sid, u16 size, u64 addr);
+int qinval_device_iotlb_sync(struct iommu *iommu, u32 max_invs_pend,
+                             u16 sid, u16 size, u64 addr);
 
 unsigned int get_cache_line_size(void);
 void cacheline_flush(char *);
diff --git a/xen/drivers/passthrough/vtd/qinval.c b/xen/drivers/passthrough/vtd/qinval.c
index 52ba2c2..ad9e265 100644
--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -72,6 +72,70 @@ static void qinval_update_qtail(struct iommu *iommu, unsigned int index)
     dmar_writeq(iommu->reg, DMAR_IQT_REG, (val << QINVAL_INDEX_SHIFT));
 }
 
+static int __must_check queue_invalidate_wait(struct iommu *iommu,
+                                              u8 iflag, u8 sw, u8 fn)
+{
+    s_time_t timeout;
+    volatile u32 poll_slot = QINVAL_STAT_INIT;
+    unsigned int index;
+    unsigned long flags;
+    u64 entry_base;
+    struct qinval_entry *qinval_entry, *qinval_entries;
+
+    spin_lock_irqsave(&iommu->register_lock, flags);
+    index = qinval_next_index(iommu);
+    entry_base = iommu_qi_ctrl(iommu)->qinval_maddr +
+                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
+    qinval_entries = map_vtd_domain_page(entry_base);
+    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
+
+    qinval_entry->q.inv_wait_dsc.lo.type = TYPE_INVAL_WAIT;
+    qinval_entry->q.inv_wait_dsc.lo.iflag = iflag;
+    qinval_entry->q.inv_wait_dsc.lo.sw = sw;
+    qinval_entry->q.inv_wait_dsc.lo.fn = fn;
+    qinval_entry->q.inv_wait_dsc.lo.res_1 = 0;
+    qinval_entry->q.inv_wait_dsc.lo.sdata = QINVAL_STAT_DONE;
+    qinval_entry->q.inv_wait_dsc.hi.res_1 = 0;
+    qinval_entry->q.inv_wait_dsc.hi.saddr = virt_to_maddr(&poll_slot) >> 2;
+
+    unmap_vtd_domain_page(qinval_entries);
+    qinval_update_qtail(iommu, index);
+    spin_unlock_irqrestore(&iommu->register_lock, flags);
+
+    /* Now we don't support interrupt method */
+    if ( sw )
+    {
+        /* In case all wait descriptor writes to same addr with same data */
+        timeout = NOW() + IOMMU_QI_TIMEOUT;
+        while ( poll_slot != QINVAL_STAT_DONE )
+        {
+            if ( NOW() > timeout )
+            {
+                print_qi_regs(iommu);
+                printk(XENLOG_WARNING VTDPREFIX
+                       "Queue invalidate wait descriptor timed out.\n");
+                return -ETIMEDOUT;
+            }
+
+            cpu_relax();
+        }
+
+        return 0;
+    }
+
+    return -EOPNOTSUPP;
+}
+
+static int invalidate_sync(struct iommu *iommu)
+{
+    struct qi_ctrl *qi_ctrl = iommu_qi_ctrl(iommu);
+
+    if ( qi_ctrl->qinval_maddr )
+        return queue_invalidate_wait(iommu, 0, 1, 1);
+
+    return 0;
+}
+
 static void queue_invalidate_context(struct iommu *iommu,
     u16 did, u16 source_id, u8 function_mask, u8 granu)
 {
@@ -102,6 +166,15 @@ static void queue_invalidate_context(struct iommu *iommu,
     unmap_vtd_domain_page(qinval_entries);
 }
 
+static int queue_invalidate_context_sync(struct iommu *iommu,
+    u16 did, u16 source_id, u8 function_mask, u8 granu)
+{
+    queue_invalidate_context(iommu, did, source_id,
+                             function_mask, granu);
+
+    return invalidate_sync(iommu);
+}
+
 static void queue_invalidate_iotlb(struct iommu *iommu,
     u8 granu, u8 dr, u8 dw, u16 did, u8 am, u8 ih, u64 addr)
 {
@@ -135,65 +208,12 @@ static void queue_invalidate_iotlb(struct iommu *iommu,
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 }
 
-static int __must_check queue_invalidate_wait(struct iommu *iommu,
-                                              u8 iflag, u8 sw, u8 fn)
-{
-    s_time_t timeout;
-    volatile u32 poll_slot = QINVAL_STAT_INIT;
-    unsigned int index;
-    unsigned long flags;
-    u64 entry_base;
-    struct qinval_entry *qinval_entry, *qinval_entries;
-
-    spin_lock_irqsave(&iommu->register_lock, flags);
-    index = qinval_next_index(iommu);
-    entry_base = iommu_qi_ctrl(iommu)->qinval_maddr +
-                 ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
-    qinval_entries = map_vtd_domain_page(entry_base);
-    qinval_entry = &qinval_entries[index % (1 << QINVAL_ENTRY_ORDER)];
-
-    qinval_entry->q.inv_wait_dsc.lo.type = TYPE_INVAL_WAIT;
-    qinval_entry->q.inv_wait_dsc.lo.iflag = iflag;
-    qinval_entry->q.inv_wait_dsc.lo.sw = sw;
-    qinval_entry->q.inv_wait_dsc.lo.fn = fn;
-    qinval_entry->q.inv_wait_dsc.lo.res_1 = 0;
-    qinval_entry->q.inv_wait_dsc.lo.sdata = QINVAL_STAT_DONE;
-    qinval_entry->q.inv_wait_dsc.hi.res_1 = 0;
-    qinval_entry->q.inv_wait_dsc.hi.saddr = virt_to_maddr(&poll_slot) >> 2;
-
-    unmap_vtd_domain_page(qinval_entries);
-    qinval_update_qtail(iommu, index);
-    spin_unlock_irqrestore(&iommu->register_lock, flags);
-
-    /* Now we don't support interrupt method */
-    if ( sw )
-    {
-        /* In case all wait descriptor writes to same addr with same data */
-        timeout = NOW() + IOMMU_QI_TIMEOUT;
-        while ( poll_slot != QINVAL_STAT_DONE )
-        {
-            if ( NOW() > timeout )
-            {
-                print_qi_regs(iommu);
-                printk(XENLOG_WARNING VTDPREFIX
-                       "Queue invalidate wait descriptor timed out.\n");
-                return -ETIMEDOUT;
-            }
-            cpu_relax();
-        }
-        return 0;
-    }
-
-    return -EOPNOTSUPP;
-}
-
-static int invalidate_sync(struct iommu *iommu)
+static int queue_invalidate_iotlb_sync(struct iommu *iommu,
+    u8 granu, u8 dr, u8 dw, u16 did, u8 am, u8 ih, u64 addr)
 {
-    struct qi_ctrl *qi_ctrl = iommu_qi_ctrl(iommu);
+    queue_invalidate_iotlb(iommu, granu, dr, dw, did, am, ih, addr);
 
-    if ( qi_ctrl->qinval_maddr )
-        return queue_invalidate_wait(iommu, 0, 1, 1);
-    return 0;
+    return invalidate_sync(iommu);
 }
 
 int qinval_device_iotlb(struct iommu *iommu,
@@ -229,6 +249,14 @@ int qinval_device_iotlb(struct iommu *iommu,
     return 0;
 }
 
+int qinval_device_iotlb_sync(struct iommu *iommu,
+    u32 max_invs_pend, u16 sid, u16 size, u64 addr)
+{
+    qinval_device_iotlb(iommu, max_invs_pend, sid, size, addr);
+
+    return invalidate_sync(iommu);
+}
+
 static void queue_invalidate_iec(struct iommu *iommu, u8 granu, u8 im, u16 iidx)
 {
     unsigned long flags;
@@ -256,7 +284,7 @@ static void queue_invalidate_iec(struct iommu *iommu, u8 granu, u8 im, u16 iidx)
     spin_unlock_irqrestore(&iommu->register_lock, flags);
 }
 
-static int __iommu_flush_iec(struct iommu *iommu, u8 granu, u8 im, u16 iidx)
+static int queue_invalidate_iec_sync(struct iommu *iommu, u8 granu, u8 im, u16 iidx)
 {
     int ret;
 
@@ -273,12 +301,12 @@ static int __iommu_flush_iec(struct iommu *iommu, u8 granu, u8 im, u16 iidx)
 
 int iommu_flush_iec_global(struct iommu *iommu)
 {
-    return __iommu_flush_iec(iommu, IEC_GLOBAL_INVL, 0, 0);
+    return queue_invalidate_iec_sync(iommu, IEC_GLOBAL_INVL, 0, 0);
 }
 
 int iommu_flush_iec_index(struct iommu *iommu, u8 im, u16 iidx)
 {
-    return __iommu_flush_iec(iommu, IEC_INDEX_INVL, im, iidx);
+    return queue_invalidate_iec_sync(iommu, IEC_INDEX_INVL, im, iidx);
 }
 
 static int flush_context_qi(
@@ -304,11 +332,9 @@ static int flush_context_qi(
     }
 
     if ( qi_ctrl->qinval_maddr != 0 )
-    {
-        queue_invalidate_context(iommu, did, sid, fm,
-                                 type >> DMA_CCMD_INVL_GRANU_OFFSET);
-        ret = invalidate_sync(iommu);
-    }
+        ret = queue_invalidate_context_sync(iommu, did, sid, fm,
+                                            type >> DMA_CCMD_INVL_GRANU_OFFSET);
+
     return ret;
 }
 
@@ -338,23 +364,24 @@ static int flush_iotlb_qi(
 
     if ( qi_ctrl->qinval_maddr != 0 )
     {
-        int rc;
-
         /* use queued invalidation */
         if (cap_write_drain(iommu->cap))
             dw = 1;
         if (cap_read_drain(iommu->cap))
             dr = 1;
         /* Need to conside the ih bit later */
-        queue_invalidate_iotlb(iommu,
-                               type >> DMA_TLB_FLUSH_GRANU_OFFSET, dr,
-                               dw, did, size_order, 0, addr);
+        ret = queue_invalidate_iotlb_sync(iommu,
+                  type >> DMA_TLB_FLUSH_GRANU_OFFSET, dr, dw, did,
+                  size_order, 0, addr);
+
+        /* TODO: Timeout error handling to be added later */
+        if ( ret )
+            return ret;
+
        if ( flush_dev_iotlb )
             ret = dev_invalidate_iotlb(iommu, did, addr, size_order, type);
-        rc = invalidate_sync(iommu);
-        if ( !ret )
-            ret = rc;
     }
+
     return ret;
 }
diff --git a/xen/drivers/passthrough/vtd/x86/ats.c b/xen/drivers/passthrough/vtd/x86/ats.c
index 334b9c1..7b1c07b 100644
--- a/xen/drivers/passthrough/vtd/x86/ats.c
+++ b/xen/drivers/passthrough/vtd/x86/ats.c
@@ -118,7 +118,6 @@ int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
     {
         u16 sid = PCI_BDF2(pdev->bus, pdev->devfn);
         bool_t sbit;
-        int rc = 0;
 
         /* Only invalidate devices that belong to this IOMMU */
         if ( pdev->iommu != iommu )
@@ -134,8 +133,8 @@ int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
             /* invalidate all translations: sbit=1,bit_63=0,bit[62:12]=1 */
             sbit = 1;
             addr = (~0UL << PAGE_SHIFT_4K) & 0x7FFFFFFFFFFFFFFF;
-            rc = qinval_device_iotlb(iommu, pdev->ats_queue_depth,
-                                     sid, sbit, addr);
+            ret = qinval_device_iotlb_sync(iommu, pdev->ats_queue_depth,
+                                           sid, sbit, addr);
             break;
         case DMA_TLB_PSI_FLUSH:
             if ( !device_in_domain(iommu, pdev, did) )
@@ -154,16 +153,13 @@ int dev_invalidate_iotlb(struct iommu *iommu, u16 did,
                 addr |= (((u64)1 << (size_order - 1)) - 1) << PAGE_SHIFT_4K;
             }
 
-            rc = qinval_device_iotlb(iommu, pdev->ats_queue_depth,
-                                     sid, sbit, addr);
+            ret = qinval_device_iotlb_sync(iommu, pdev->ats_queue_depth,
+                                           sid, sbit, addr);
             break;
         default:
             dprintk(XENLOG_WARNING VTDPREFIX, "invalid vt-d flush type\n");
             return -EOPNOTSUPP;
         }
-
-        if ( !ret )
-            ret = rc;
     }
 
     return ret;