From patchwork Mon Feb 10 17:33:09 2020
X-Patchwork-Submitter: Andrew Cooper
X-Patchwork-Id: 11373845
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel
Date: Mon, 10 Feb 2020 17:33:09 +0000
Message-ID: <20200210173309.16521-1-andrew.cooper3@citrix.com>
In-Reply-To: <20200203144340.4614-5-andrew.cooper3@citrix.com>
References: <20200203144340.4614-5-andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v2 4/4] AMD/IOMMU: Treat head/tail pointers as byte offsets
Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The MMIO registers are already byte offsets.  Using them in this form removes
the need to shift their values for use.

It is also inefficient to store both entries and alloc_size (which only
differ by entry_size).  Rename alloc_size to size, and drop entries entirely,
which simplifies the allocation/deallocation helpers slightly.

Mark send_iommu_command() and invalidate_iommu_all() as static, as they have
no external declaration or callers.

Signed-off-by: Andrew Cooper
Reviewed-by: Jan Beulich
---
CC: Jan Beulich
CC: Wei Liu
CC: Roger Pau Monné

v2:
 * Mask head/tail pointers
 * Drop unnecessary cast.

Tested on a Rome system.
---
 xen/drivers/passthrough/amd/iommu-defs.h |  1 -
 xen/drivers/passthrough/amd/iommu.h      | 18 ++----------------
 xen/drivers/passthrough/amd/iommu_cmd.c  | 21 +++++++++------------
 xen/drivers/passthrough/amd/iommu_init.c | 32 ++++++++++++++------------------
 4 files changed, 25 insertions(+), 47 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu-defs.h b/xen/drivers/passthrough/amd/iommu-defs.h
index 963009de6a..50613ca150 100644
--- a/xen/drivers/passthrough/amd/iommu-defs.h
+++ b/xen/drivers/passthrough/amd/iommu-defs.h
@@ -481,7 +481,6 @@ struct amd_iommu_pte {
 #define INV_IOMMU_ALL_PAGES_ADDRESS ((1ULL << 63) - 1)
 
 #define IOMMU_RING_BUFFER_PTR_MASK 0x0007FFF0
-#define IOMMU_RING_BUFFER_PTR_SHIFT 4
 
 #define IOMMU_CMD_DEVICE_ID_MASK 0x0000FFFF
 #define IOMMU_CMD_DEVICE_ID_SHIFT 0
diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index 0b598d06f8..1abfdc685a 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -58,12 +58,11 @@ struct table_struct {
 };
 
 struct ring_buffer {
+    spinlock_t lock; /* protect buffer pointers */
     void *buffer;
-    unsigned long entries;
-    unsigned long alloc_size;
     uint32_t tail;
     uint32_t head;
-    spinlock_t lock; /* protect buffer pointers */
+    uint32_t size;
 };
 
 typedef struct iommu_cap {
@@ -379,19 +378,6 @@ static inline int iommu_has_cap(struct amd_iommu *iommu, uint32_t bit)
     return !!(iommu->cap.header & (1u << bit));
 }
 
-/* access tail or head pointer of ring buffer */
-static inline uint32_t iommu_get_rb_pointer(uint32_t reg)
-{
-    return get_field_from_reg_u32(reg, IOMMU_RING_BUFFER_PTR_MASK,
-                                  IOMMU_RING_BUFFER_PTR_SHIFT);
-}
-
-static inline void iommu_set_rb_pointer(uint32_t *reg, uint32_t val)
-{
-    set_field_in_reg_u32(val, *reg, IOMMU_RING_BUFFER_PTR_MASK,
-                         IOMMU_RING_BUFFER_PTR_SHIFT, reg);
-}
-
 /* access device id field from iommu cmd */
 static inline uint16_t iommu_get_devid_from_cmd(uint32_t cmd)
 {
diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
index 1eae339692..249ed345a0 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -24,16 +24,15 @@ static int queue_iommu_command(struct amd_iommu *iommu, u32 cmd[])
 {
     uint32_t tail, head;
 
-    tail = iommu->cmd_buffer.tail;
-    if ( ++tail == iommu->cmd_buffer.entries )
+    tail = iommu->cmd_buffer.tail + IOMMU_CMD_BUFFER_ENTRY_SIZE;
+    if ( tail == iommu->cmd_buffer.size )
         tail = 0;
 
-    head = iommu_get_rb_pointer(readl(iommu->mmio_base +
-                                      IOMMU_CMD_BUFFER_HEAD_OFFSET));
+    head = readl(iommu->mmio_base +
+                 IOMMU_CMD_BUFFER_HEAD_OFFSET) & IOMMU_RING_BUFFER_PTR_MASK;
     if ( head != tail )
     {
-        memcpy(iommu->cmd_buffer.buffer +
-               (iommu->cmd_buffer.tail * IOMMU_CMD_BUFFER_ENTRY_SIZE),
+        memcpy(iommu->cmd_buffer.buffer + iommu->cmd_buffer.tail,
                cmd, IOMMU_CMD_BUFFER_ENTRY_SIZE);
 
         iommu->cmd_buffer.tail = tail;
@@ -45,13 +44,11 @@ static int queue_iommu_command(struct amd_iommu *iommu, u32 cmd[])
 
 static void commit_iommu_command_buffer(struct amd_iommu *iommu)
 {
-    u32 tail = 0;
-
-    iommu_set_rb_pointer(&tail, iommu->cmd_buffer.tail);
-    writel(tail, iommu->mmio_base+IOMMU_CMD_BUFFER_TAIL_OFFSET);
+    writel(iommu->cmd_buffer.tail,
+           iommu->mmio_base + IOMMU_CMD_BUFFER_TAIL_OFFSET);
 }
 
-int send_iommu_command(struct amd_iommu *iommu, u32 cmd[])
+static int send_iommu_command(struct amd_iommu *iommu, u32 cmd[])
 {
     if ( queue_iommu_command(iommu, cmd) )
     {
@@ -261,7 +258,7 @@ static void invalidate_interrupt_table(struct amd_iommu *iommu, u16 device_id)
     send_iommu_command(iommu, cmd);
 }
 
-void invalidate_iommu_all(struct amd_iommu *iommu)
+static void invalidate_iommu_all(struct amd_iommu *iommu)
 {
     u32 cmd[4], entry;
 
diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index 5544dd9505..c42b608f07 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -117,7 +117,7 @@ static void register_iommu_cmd_buffer_in_mmio_space(struct amd_iommu *iommu)
     iommu_set_addr_lo_to_reg(&entry, addr_lo >> PAGE_SHIFT);
     writel(entry, iommu->mmio_base + IOMMU_CMD_BUFFER_BASE_LOW_OFFSET);
 
-    power_of2_entries = get_order_from_bytes(iommu->cmd_buffer.alloc_size) +
+    power_of2_entries = get_order_from_bytes(iommu->cmd_buffer.size) +
         IOMMU_CMD_BUFFER_POWER_OF2_ENTRIES_PER_PAGE;
 
     entry = 0;
@@ -145,7 +145,7 @@ static void register_iommu_event_log_in_mmio_space(struct amd_iommu *iommu)
     iommu_set_addr_lo_to_reg(&entry, addr_lo >> PAGE_SHIFT);
     writel(entry, iommu->mmio_base + IOMMU_EVENT_LOG_BASE_LOW_OFFSET);
 
-    power_of2_entries = get_order_from_bytes(iommu->event_log.alloc_size) +
+    power_of2_entries = get_order_from_bytes(iommu->event_log.size) +
         IOMMU_EVENT_LOG_POWER_OF2_ENTRIES_PER_PAGE;
 
     entry = 0;
@@ -173,7 +173,7 @@ static void register_iommu_ppr_log_in_mmio_space(struct amd_iommu *iommu)
     iommu_set_addr_lo_to_reg(&entry, addr_lo >> PAGE_SHIFT);
     writel(entry, iommu->mmio_base + IOMMU_PPR_LOG_BASE_LOW_OFFSET);
 
-    power_of2_entries = get_order_from_bytes(iommu->ppr_log.alloc_size) +
+    power_of2_entries = get_order_from_bytes(iommu->ppr_log.size) +
         IOMMU_PPR_LOG_POWER_OF2_ENTRIES_PER_PAGE;
 
     entry = 0;
@@ -300,7 +300,7 @@ static int iommu_read_log(struct amd_iommu *iommu,
                           unsigned int entry_size,
                           void (*parse_func)(struct amd_iommu *, u32 *))
 {
-    u32 tail, head, *entry, tail_offest, head_offset;
+    u32 tail, *entry, tail_offest, head_offset;
 
     BUG_ON(!iommu || ((log != &iommu->event_log) && (log != &iommu->ppr_log)));
 
@@ -315,23 +315,21 @@ static int iommu_read_log(struct amd_iommu *iommu,
                   IOMMU_EVENT_LOG_HEAD_OFFSET :
                   IOMMU_PPR_LOG_HEAD_OFFSET;
 
-    tail = readl(iommu->mmio_base + tail_offest);
-    tail = iommu_get_rb_pointer(tail);
+    tail = readl(iommu->mmio_base + tail_offest) & IOMMU_RING_BUFFER_PTR_MASK;
 
     while ( tail != log->head )
     {
         /* read event log entry */
-        entry = (u32 *)(log->buffer + log->head * entry_size);
+        entry = log->buffer + log->head;
 
         parse_func(iommu, entry);
-        if ( ++log->head == log->entries )
+
+        log->head += entry_size;
+        if ( log->head == log->size )
             log->head = 0;
 
         /* update head pointer */
-        head = 0;
-        iommu_set_rb_pointer(&head, log->head);
-
-        writel(head, iommu->mmio_base + head_offset);
+        writel(log->head, iommu->mmio_base + head_offset);
     }
 
     spin_unlock(&log->lock);
@@ -1000,7 +998,7 @@ static void __init deallocate_buffer(void *buf, unsigned long sz)
 
 static void __init deallocate_ring_buffer(struct ring_buffer *ring_buf)
 {
-    deallocate_buffer(ring_buf->buffer, ring_buf->alloc_size);
+    deallocate_buffer(ring_buf->buffer, ring_buf->size);
     ring_buf->buffer = NULL;
     ring_buf->head = 0;
     ring_buf->tail = 0;
@@ -1035,11 +1033,9 @@ static void *__init allocate_ring_buffer(struct ring_buffer *ring_buf,
     ring_buf->tail = 0;
 
     spin_lock_init(&ring_buf->lock);
-
-    ring_buf->alloc_size = PAGE_SIZE << get_order_from_bytes(entries *
-                                                             entry_size);
-    ring_buf->entries = ring_buf->alloc_size / entry_size;
-    ring_buf->buffer = allocate_buffer(ring_buf->alloc_size, name, clear);
+
+    ring_buf->size = PAGE_SIZE << get_order_from_bytes(entries * entry_size);
+    ring_buf->buffer = allocate_buffer(ring_buf->size, name, clear);
 
     return ring_buf->buffer;
 }