From patchwork Thu Jun 27 15:23:45 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 11019889
Message-Id: <5D14DF81020000780023B9DF@prv1-mh.provo.novell.com>
In-Reply-To: <5D14DDA6020000780023B96E@prv1-mh.provo.novell.com>
References: <5D024C500200007800237DD8@prv1-mh.provo.novell.com>
 <5D14DDA6020000780023B96E@prv1-mh.provo.novell.com>
Date: Thu, 27 Jun 2019 09:23:45 -0600
From: "Jan Beulich"
To: "xen-devel"
Cc: Andrew Cooper, Brian Woods, Suravee Suthikulpanit
Subject: [Xen-devel] [PATCH RFC v2 10/10] AMD/IOMMU: correct IRTE updating

While for 32-bit IRTEs I think we can safely continue to assume that the
writes will translate to a single MOV, the use of CMPXCHG16B is more heavy
handed than necessary for the 128-bit form, and the flushing didn't get done
along the lines of what the specification says. Mark entries to be updated
as not remapped (which will result in interrupt requests getting target
aborted, but the interrupts should be masked anyway at that point in time),
issue the flush, and only then write the new entry. In the 128-bit IRTE case
set RemapEn separately last, so that the ordering of the writes of the two
64-bit halves won't matter.

In update_intremap_entry_from_msi_msg() also fold the duplicate initial lock
determination and acquire into just a single instance.

Signed-off-by: Jan Beulich
---
RFC: Putting the flush invocations in loops isn't overly nice, but I don't
think this can really be abused, since callers up the stack hold further
locks. Nevertheless I'd like to ask for better suggestions.
---
v2: Parts morphed into earlier patch.

--- a/xen/drivers/passthrough/amd/iommu_intr.c
+++ b/xen/drivers/passthrough/amd/iommu_intr.c
@@ -238,8 +238,7 @@ static void update_intremap_entry(union
         break;
 
     case irte128:
-        ACCESS_ONCE(entry.ptr128->raw[0]) = 0;
-        barrier();
+        ASSERT(!entry.ptr128->full.remap_en);
         entry.ptr128->raw[1] =
             container_of(&full, union irte128, full)->raw[1];
         barrier();
@@ -308,6 +307,20 @@ static int update_intremap_entry_from_io
     }
 
     entry = get_intremap_entry(iommu->seg, req_id, offset);
+
+    /* The RemapEn fields match for all formats. */
+    while ( iommu->enabled && entry.ptr32->basic.remap_en )
+    {
+        entry.ptr32->basic.remap_en = 0;
+        spin_unlock(lock);
+
+        spin_lock(&iommu->lock);
+        amd_iommu_flush_intremap(iommu, req_id);
+        spin_unlock(&iommu->lock);
+
+        spin_lock(lock);
+    }
+
     if ( fresh )
         /* nothing */;
     else if ( !lo_update )
@@ -337,13 +350,6 @@ static int update_intremap_entry_from_io
 
     spin_unlock_irqrestore(lock, flags);
 
-    if ( iommu->enabled && !fresh )
-    {
-        spin_lock_irqsave(&iommu->lock, flags);
-        amd_iommu_flush_intremap(iommu, req_id);
-        spin_unlock_irqrestore(&iommu->lock, flags);
-    }
-
     set_rte_index(rte, offset);
 
     return 0;
@@ -608,19 +614,27 @@ static int update_intremap_entry_from_ms
     req_id = get_dma_requestor_id(iommu->seg, bdf);
     alias_id = get_intremap_requestor_id(iommu->seg, bdf);
 
+    lock = get_intremap_lock(iommu->seg, req_id);
+    spin_lock_irqsave(lock, flags);
+
     if ( msg == NULL )
     {
-        lock = get_intremap_lock(iommu->seg, req_id);
-        spin_lock_irqsave(lock, flags);
         for ( i = 0; i < nr; ++i )
             free_intremap_entry(iommu->seg, req_id, *remap_index + i);
         spin_unlock_irqrestore(lock, flags);
-        goto done;
-    }
 
-    lock = get_intremap_lock(iommu->seg, req_id);
+        if ( iommu->enabled )
+        {
+            spin_lock_irqsave(&iommu->lock, flags);
+            amd_iommu_flush_intremap(iommu, req_id);
+            if ( alias_id != req_id )
+                amd_iommu_flush_intremap(iommu, alias_id);
+            spin_unlock_irqrestore(&iommu->lock, flags);
+        }
+
+        return 0;
+    }
 
-    spin_lock_irqsave(lock, flags);
     dest_mode = (msg->address_lo >> MSI_ADDR_DESTMODE_SHIFT) & 0x1;
     delivery_mode = (msg->data >> MSI_DATA_DELIVERY_MODE_SHIFT) & 0x1;
     vector = (msg->data >> MSI_DATA_VECTOR_SHIFT) & MSI_DATA_VECTOR_MASK;
@@ -644,6 +658,22 @@ static int update_intremap_entry_from_ms
     }
 
     entry = get_intremap_entry(iommu->seg, req_id, offset);
+
+    /* The RemapEn fields match for all formats. */
+    while ( iommu->enabled && entry.ptr32->basic.remap_en )
+    {
+        entry.ptr32->basic.remap_en = 0;
+        spin_unlock(lock);
+
+        spin_lock(&iommu->lock);
+        amd_iommu_flush_intremap(iommu, req_id);
+        if ( alias_id != req_id )
+            amd_iommu_flush_intremap(iommu, alias_id);
+        spin_unlock(&iommu->lock);
+
+        spin_lock(lock);
+    }
+
     update_intremap_entry(entry, vector, delivery_mode, dest_mode, dest);
 
     spin_unlock_irqrestore(lock, flags);
@@ -663,16 +693,6 @@ static int update_intremap_entry_from_ms
                get_ivrs_mappings(iommu->seg)[alias_id].intremap_table);
     }
 
-done:
-    if ( iommu->enabled )
-    {
-        spin_lock_irqsave(&iommu->lock, flags);
-        amd_iommu_flush_intremap(iommu, req_id);
-        if ( alias_id != req_id )
-            amd_iommu_flush_intremap(iommu, alias_id);
-        spin_unlock_irqrestore(&iommu->lock, flags);
-    }
-
     return 0;
 }
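
For illustration only, a minimal standalone C model of the update ordering
the description asks for; the structure layout, the flush stub, and all names
below are simplified stand-ins rather than Xen's actual definitions:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for a 128-bit IRTE: bit 0 of raw[0] plays the
 * role of RemapEn. */
struct irte128_model {
    volatile uint64_t raw[2];
};

/* Stand-in for amd_iommu_flush_intremap(): the real code issues an
 * interrupt table invalidation command and waits for completion. */
static void flush_intremap_stub(unsigned int req_id)
{
    printf("flush interrupt remapping cache for requestor %#x\n", req_id);
}

/*
 * Update sequence as described above:
 *  1. clear RemapEn, so in-flight requests get target aborted rather
 *     than using a half-updated entry,
 *  2. flush the IOMMU's cached copy of the (now disabled) entry,
 *  3. write the high half first, then the low half carrying RemapEn,
 *     so the ordering of the two 64-bit stores can't matter to hardware.
 */
static void update_irte128_model(struct irte128_model *e,
                                 uint64_t lo, uint64_t hi,
                                 unsigned int req_id)
{
    e->raw[0] &= ~UINT64_C(1);      /* step 1: RemapEn = 0 */
    __sync_synchronize();

    flush_intremap_stub(req_id);    /* step 2: drop stale cached copy */

    e->raw[1] = hi;                 /* step 3a: high half, entry still disabled */
    __sync_synchronize();
    e->raw[0] = lo | 1;             /* step 3b: low half, RemapEn set last */
}

int main(void)
{
    struct irte128_model e = { { UINT64_C(1), 0 } }; /* previously enabled */

    update_irte128_model(&e, UINT64_C(0x1230), UINT64_C(0xabcd), 0x42);
    printf("IRTE now: hi=%#" PRIx64 " lo=%#" PRIx64 "\n", e.raw[1], e.raw[0]);
    return 0;
}

In the patch itself, steps 1 and 2 correspond to the new while loops clearing
basic.remap_en and calling amd_iommu_flush_intremap(), while step 3 is handled
by update_intremap_entry() together with the new ASSERT().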