From patchwork Tue Jul 16 16:23:55 2019
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 11046403
From: Andrew Cooper
To: Xen-devel
Date: Tue, 16 Jul 2019 17:23:55 +0100
Message-ID: <20190716162355.1321-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
Subject: [Xen-devel] [PATCH v3] passthrough/vtd: Don't DMA to the stack in queue_invalidate_wait()
Cc: Andrew Cooper, Kevin Tian, Jan Beulich

DMA-ing to the stack is considered bad practice.  In this case, if a timeout
occurs because of a sluggish device which is processing the request, the
completion notification will corrupt the stack of a subsequent deeper call
tree.

Place the poll_slot in a percpu area and DMA to that instead.

Fix the declaration of saddr in struct qinval_entry to avoid a shift by two.
The requirement here is that the DMA address is dword aligned, which is
covered by poll_slot's type (a worked sketch of this equivalence follows the
changelog below).

This change does not address other issues.  Correlating completions after a
timeout with their request is a more complicated change.

Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
Reviewed-by: Kevin Tian
---
CC: Jan Beulich
CC: Kevin Tian

It turns out that this has been pending since 4.10, and is grossly late.

v3:
 * Fix saddr declaration to drop a shift-by-two.
 * Drop the volatile attribute.  Use ACCESS_ONCE() instead.
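As an aside for reviewers, the stand-alone sketch below (not part of the
patch) illustrates why dropping the shift-by-two does not change the bits
written into the descriptor: the Invalidation Wait Descriptor's Status
Address occupies bits 63:2 of the high qword, so storing addr >> 2 in a
62-bit field that starts at bit 2 yields the same raw value as storing addr
in a plain u64, as long as the low two address bits are zero.  The address
constant is purely illustrative, and the old-layout computation assumes the
conventional x86-64 bitfield layout (low-order bits allocated first).

/* Hedged sketch, not part of the patch: shows that dropping the ">> 2"
 * leaves the descriptor's high qword unchanged, provided the status
 * address is dword aligned. */
#include <assert.h>
#include <stdint.h>

int main(void)
{
    /* Purely illustrative machine address; a uint32_t poll slot is
     * 4-byte aligned, so the low two bits are zero. */
    uint64_t maddr = 0x0000123456789abcULL;

    /* Old encoding: res_1 in bits 1:0 (zeroed), saddr = maddr >> 2 in
     * bits 63:2 (low-order-first bitfield layout assumed). */
    uint64_t old_hi = (maddr >> 2) << 2;

    /* New encoding: plain u64 saddr holding the full address. */
    uint64_t new_hi = maddr;

    assert((maddr & 3) == 0);   /* dword alignment requirement */
    assert(old_hi == new_hi);   /* identical raw descriptor value */

    return 0;
}

In other words, the plain u64 field in the new declaration is sufficient
because poll_slot's type already guarantees the alignment the hardware
requires.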
---
 xen/drivers/passthrough/vtd/iommu.h  | 3 +--
 xen/drivers/passthrough/vtd/qinval.c | 9 +++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.h b/xen/drivers/passthrough/vtd/iommu.h
index 1a992f72d6..c9290a3996 100644
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -444,8 +444,7 @@ struct qinval_entry {
                     sdata   : 32;
             }lo;
             struct {
-                u64 res_1   : 2,
-                    saddr   : 62;
+                u64 saddr;
             }hi;
         }inv_wait_dsc;
     }q;
diff --git a/xen/drivers/passthrough/vtd/qinval.c b/xen/drivers/passthrough/vtd/qinval.c
index 01447cf9a8..09cbd36ebb 100644
--- a/xen/drivers/passthrough/vtd/qinval.c
+++ b/xen/drivers/passthrough/vtd/qinval.c
@@ -147,13 +147,15 @@ static int __must_check queue_invalidate_wait(struct iommu *iommu,
                                               u8 iflag, u8 sw, u8 fn,
                                               bool_t flush_dev_iotlb)
 {
-    volatile u32 poll_slot = QINVAL_STAT_INIT;
+    static DEFINE_PER_CPU(uint32_t, poll_slot);
     unsigned int index;
     unsigned long flags;
     u64 entry_base;
     struct qinval_entry *qinval_entry, *qinval_entries;
+    uint32_t *this_poll_slot = &this_cpu(poll_slot);
 
     spin_lock_irqsave(&iommu->register_lock, flags);
+    ACCESS_ONCE(*this_poll_slot) = QINVAL_STAT_INIT;
     index = qinval_next_index(iommu);
     entry_base = iommu_qi_ctrl(iommu)->qinval_maddr +
                  ((index >> QINVAL_ENTRY_ORDER) << PAGE_SHIFT);
@@ -166,8 +168,7 @@ static int __must_check queue_invalidate_wait(struct iommu *iommu,
     qinval_entry->q.inv_wait_dsc.lo.fn = fn;
     qinval_entry->q.inv_wait_dsc.lo.res_1 = 0;
     qinval_entry->q.inv_wait_dsc.lo.sdata = QINVAL_STAT_DONE;
-    qinval_entry->q.inv_wait_dsc.hi.res_1 = 0;
-    qinval_entry->q.inv_wait_dsc.hi.saddr = virt_to_maddr(&poll_slot) >> 2;
+    qinval_entry->q.inv_wait_dsc.hi.saddr = virt_to_maddr(this_poll_slot);
 
     unmap_vtd_domain_page(qinval_entries);
     qinval_update_qtail(iommu, index);
@@ -182,7 +183,7 @@ static int __must_check queue_invalidate_wait(struct iommu *iommu,
         timeout = NOW() + MILLISECS(flush_dev_iotlb ?
                                     iommu_dev_iotlb_timeout : VTD_QI_TIMEOUT);
 
-        while ( poll_slot != QINVAL_STAT_DONE )
+        while ( ACCESS_ONCE(*this_poll_slot) != QINVAL_STAT_DONE )
         {
             if ( NOW() > timeout )
             {