From patchwork Thu Aug 24 16:15:01 2023
From: Lukas Wunner
Date: Thu, 24 Aug 2023 18:15:01 +0200
Subject: [PATCH v2 01/10] xhci: Clear EHB bit only at end of interrupt handler
To: Mathias Nyman, Greg Kroah-Hartman
Cc: linux-usb@vger.kernel.org, Jonathan Bell, Phil Elwell,
    Nicolas Saenz Julienne, Stefan Wahren, Philipp Rosenberger,
    Lino Sanfilippo, Peter Chen

The Event Handler Busy bit shall be cleared by software when the Event
Ring is empty.  The xHC is thereby informed that it may raise another
interrupt once it has enqueued new events (sec 4.17.2).

However since commit dc0ffbea5729 ("usb: host: xhci: update event ring
dequeue pointer on purpose"), the EHB bit is already cleared after half
a segment has been processed.  As a result, spurious interrupts may
occur:

- xhci_irq() processes half a segment, clears EHB, continues processing
  remaining events.

- xHC enqueues new events.  Because EHB has been cleared, xHC sets
  Interrupt Pending bit.  Interrupt moderation countdown begins.

- Meanwhile xhci_irq() continues processing events.  Interrupt
  moderation countdown reaches zero, so an MSI interrupt is signaled.

- xhci_irq() empties the Event Ring, clears EHB again and is done.

- Because an MSI interrupt has been signaled, xhci_irq() is run again.
  It discovers there's nothing to do and returns IRQ_NONE.

Avoid by clearing the EHB bit only at the end of xhci_irq().
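[Editorial illustration, not part of the submitted patch: a minimal,
stand-alone C sketch of the handler flow described above and of the fix
that follows.  The helper names and the simulated number of pending
events are assumptions for the sketch; in the driver the ERDP update is
a register write and EHB is write-1-to-clear.]

/* Stand-alone sketch (not driver code): EHB is cleared only by the
 * final ERDP write, after the Event Ring has been emptied.
 */
#include <stdbool.h>
#include <stdio.h>

#define TRBS_PER_SEGMENT	256

static void update_erst_dequeue(bool clear_ehb)
{
	/* stands in for the ERDP register write in xhci_update_erst_dequeue() */
	printf("ERDP write, EHB %s\n", clear_ehb ? "cleared" : "left pending");
}

static int handle_event(int *remaining)
{
	return (*remaining)-- > 0;	/* 1 while events remain, like xhci_handle_event() */
}

int main(void)
{
	int remaining = 3 * TRBS_PER_SEGMENT / 2;	/* assume 1.5 segments of pending events */
	int event_loop = 0;

	while (handle_event(&remaining) > 0) {
		if (event_loop++ < TRBS_PER_SEGMENT / 2)
			continue;
		update_erst_dequeue(false);	/* mid-loop: advance dequeue pointer only */
		event_loop = 0;
	}

	update_erst_dequeue(true);		/* end of handler: now clear EHB */
	return 0;
}

[Before this change every ERDP write set ERST_EHB, so the mid-loop
updates re-armed the interrupt while events were still being handled.]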
Fixes: dc0ffbea5729 ("usb: host: xhci: update event ring dequeue pointer on purpose")
Signed-off-by: Lukas Wunner
Cc: stable@vger.kernel.org # v5.5+
Cc: Peter Chen
---
Change v1 -> v2:
  Use ERST_DESI_MASK instead of ERST_PTR_MASK when constructing the new
  ERDP value to avoid carrying over a set EHB bit.

 drivers/usb/host/xhci-ring.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
index 1dde53f6eb31..21f50d25157f 100644
--- a/drivers/usb/host/xhci-ring.c
+++ b/drivers/usb/host/xhci-ring.c
@@ -2996,7 +2996,8 @@ static int xhci_handle_event(struct xhci_hcd *xhci, struct xhci_interrupter *ir)
  */
 static void xhci_update_erst_dequeue(struct xhci_hcd *xhci,
 				     struct xhci_interrupter *ir,
-				     union xhci_trb *event_ring_deq)
+				     union xhci_trb *event_ring_deq,
+				     bool clear_ehb)
 {
 	u64 temp_64;
 	dma_addr_t deq;
@@ -3017,12 +3018,13 @@ static void xhci_update_erst_dequeue(struct xhci_hcd *xhci,
 			return;
 
 		/* Update HC event ring dequeue pointer */
-		temp_64 &= ERST_PTR_MASK;
+		temp_64 &= ERST_DESI_MASK;
 		temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK);
 	}
 
 	/* Clear the event handler busy flag (RW1C) */
-	temp_64 |= ERST_EHB;
+	if (clear_ehb)
+		temp_64 |= ERST_EHB;
 	xhci_write_64(xhci, temp_64, &ir->ir_set->erst_dequeue);
 }
 
@@ -3103,7 +3105,7 @@ irqreturn_t xhci_irq(struct usb_hcd *hcd)
 	while (xhci_handle_event(xhci, ir) > 0) {
 		if (event_loop++ < TRBS_PER_SEGMENT / 2)
 			continue;
-		xhci_update_erst_dequeue(xhci, ir, event_ring_deq);
+		xhci_update_erst_dequeue(xhci, ir, event_ring_deq, false);
 		event_ring_deq = ir->event_ring->dequeue;
 
 		/* ring is half-full, force isoc trbs to interrupt more often */
@@ -3113,7 +3115,7 @@ irqreturn_t xhci_irq(struct usb_hcd *hcd)
 		event_loop = 0;
 	}
 
-	xhci_update_erst_dequeue(xhci, ir, event_ring_deq);
+	xhci_update_erst_dequeue(xhci, ir, event_ring_deq, true);
 	ret = IRQ_HANDLED;
 
 out:

From patchwork Thu Aug 24 16:15:02 2023
From: Lukas Wunner
Date: Thu, 24 Aug 2023 18:15:02 +0200
Subject: [PATCH v2 02/10] xhci: Preserve RsvdP bits in ERSTBA register correctly
To: Mathias Nyman, Greg Kroah-Hartman
Cc: linux-usb@vger.kernel.org, Jonathan Bell, Phil Elwell,
    Nicolas Saenz Julienne, Stefan Wahren, Philipp Rosenberger,
    Lino Sanfilippo

xhci_add_interrupter() erroneously preserves only the lowest 4 bits
when writing the ERSTBA register, not the lowest 6 bits.  Fix it.

Migrate the ERST_BASE_RSVDP macro to the modern GENMASK_ULL() syntax
to avoid a u64 cast.

This was previously fixed by commit 8c1cbec9db1a ("xhci: fix event ring
segment table related masks and variables in header"), but immediately
undone by commit b17a57f89f69 ("xhci: Refactor interrupter code for
initial multi interrupter support.").

Fixes: b17a57f89f69 ("xhci: Refactor interrupter code for initial multi interrupter support.")
Signed-off-by: Lukas Wunner
Cc: stable@vger.kernel.org # v6.3+
---
 drivers/usb/host/xhci-mem.c | 4 ++--
 drivers/usb/host/xhci.h     | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index 8714ab5bf04d..0a37f0d511cf 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -2285,8 +2285,8 @@ xhci_add_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir,
 	writel(erst_size, &ir->ir_set->erst_size);
 
 	erst_base = xhci_read_64(xhci, &ir->ir_set->erst_base);
-	erst_base &= ERST_PTR_MASK;
-	erst_base |= (ir->erst.erst_dma_addr & (u64) ~ERST_PTR_MASK);
+	erst_base &= ERST_BASE_RSVDP;
+	erst_base |= ir->erst.erst_dma_addr & ~ERST_BASE_RSVDP;
 	xhci_write_64(xhci, erst_base, &ir->ir_set->erst_base);
 
 	/* Set the event ring dequeue address of this interrupter */
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 7e282b4522c0..5df370482521 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -514,7 +514,7 @@ struct xhci_intr_reg {
 #define ERST_SIZE_MASK		(0xffff << 16)
 
 /* erst_base bitmasks */
-#define ERST_BASE_RSVDP		(0x3f)
+#define ERST_BASE_RSVDP		(GENMASK_ULL(5, 0))
 
 /* erst_dequeue bitmasks */
 /* Dequeue ERST Segment Index (DESI) - Segment number (or alias)

From patchwork Thu Aug 24 16:15:03 2023
From: Lukas Wunner
Date: Thu, 24 Aug 2023 18:15:03 +0200
Subject: [PATCH v2 03/10] xhci: Set DESI bits in ERDP register correctly
To: Mathias Nyman, Greg Kroah-Hartman
Cc: linux-usb@vger.kernel.org, Jonathan Bell, Phil Elwell,
    Nicolas Saenz Julienne, Stefan Wahren, Philipp Rosenberger,
    Lino Sanfilippo

When using more than one Event Ring segment (ERSTSZ > 1), software
shall set the DESI bits in the ERDP register to the number of the
segment to which the upper ERDP bits are pointing.

The xHC may use the DESI bits as a shortcut to determine whether it
needs to check for an Event Ring Full condition:  If it's enqueueing
events in a different segment, it need not compare its internal Enqueue
Pointer with the Dequeue Pointer in the upper bits of the ERDP register
(sec 5.5.2.3.3).

Not setting the DESI bits correctly can result in the xHC enqueueing
events past the Dequeue Pointer.  On Renesas uPD720201 host
controllers, incorrect DESI bits cause an interrupt storm.  For
comparison, VIA VL805 host controllers do not exhibit such problems.
Perhaps they do not take advantage of the optimization afforded by the
DESI bits.

To fix the issue, assign the segment number to each struct xhci_segment
in xhci_segment_alloc().  When advancing the Dequeue Pointer in
xhci_update_erst_dequeue(), write the segment number to the DESI bits.

On driver probe, set the DESI bits to zero in xhci_set_hc_event_deq()
as processing starts in segment 0.  Likewise on driver teardown, clear
the DESI bits to zero in xhci_free_interrupter() when clearing the
upper bits of the ERDP register.  Previously those functions
(incorrectly) treated the DESI bits as if they're declared RsvdP.
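[Editorial illustration, not part of the submitted patch: how an ERDP
value is put together once the dequeue segment's number is known, per
the register layout in sec 5.5.2.3.3 (DESI in bits 2:0, EHB in bit 3,
dequeue pointer in bits 63:4).  The constants and helper name are local
to this sketch; the pointer mask already uses the inverted convention
that patch 08/10 introduces for ERST_PTR_MASK.]

/* Stand-alone sketch (not driver code) of composing an ERDP write. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DESI_MASK	0x7ULL		/* Dequeue ERST Segment Index, bits 2:0 */
#define EHB		(1ULL << 3)	/* Event Handler Busy, RW1C, bit 3 */
#define PTR_MASK	(~0xfULL)	/* dequeue pointer, bits 63:4 */

static uint64_t compose_erdp(uint64_t deq_dma, unsigned int seg_num, bool clear_ehb)
{
	uint64_t erdp = seg_num & DESI_MASK;	/* segment the dequeue pointer lives in */

	erdp |= deq_dma & PTR_MASK;		/* 16-byte aligned dequeue pointer */
	if (clear_ehb)
		erdp |= EHB;			/* set only by the final write in the handler */
	return erdp;
}

int main(void)
{
	/* assumed example values: dequeue TRB at DMA address 0x12345670 in segment 1 */
	printf("mid-loop ERDP: %#llx\n",
	       (unsigned long long)compose_erdp(0x12345670, 1, false));
	printf("final ERDP:    %#llx\n",
	       (unsigned long long)compose_erdp(0x12345670, 1, true));
	return 0;
}

[With a single Event Ring segment the DESI bits are always 0, so the
previous handling was harmless; they only start to matter once patch
04/10 enables more than one segment.]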
Signed-off-by: Lukas Wunner --- drivers/usb/host/xhci-mem.c | 25 +++++++++++-------------- drivers/usb/host/xhci-ring.c | 2 +- drivers/usb/host/xhci.h | 1 + 3 files changed, 13 insertions(+), 15 deletions(-) diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index 0a37f0d511cf..17c9942010c8 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -29,6 +29,7 @@ static struct xhci_segment *xhci_segment_alloc(struct xhci_hcd *xhci, unsigned int cycle_state, unsigned int max_packet, + unsigned int num, gfp_t flags) { struct xhci_segment *seg; @@ -60,6 +61,7 @@ static struct xhci_segment *xhci_segment_alloc(struct xhci_hcd *xhci, for (i = 0; i < TRBS_PER_SEGMENT; i++) seg->trbs[i].link.control = cpu_to_le32(TRB_CYCLE); } + seg->num = num; seg->dma = dma; seg->next = NULL; @@ -324,6 +326,7 @@ static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci, enum xhci_ring_type type, unsigned int max_packet, gfp_t flags) { struct xhci_segment *prev; + unsigned int num = 0; bool chain_links; /* Set chain bit for 0.95 hosts, and for isoc rings on AMD 0.96 host */ @@ -331,16 +334,17 @@ static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci, (type == TYPE_ISOC && (xhci->quirks & XHCI_AMD_0x96_HOST))); - prev = xhci_segment_alloc(xhci, cycle_state, max_packet, flags); + prev = xhci_segment_alloc(xhci, cycle_state, max_packet, num, flags); if (!prev) return -ENOMEM; - num_segs--; + num++; *first = prev; - while (num_segs > 0) { + while (num < num_segs) { struct xhci_segment *next; - next = xhci_segment_alloc(xhci, cycle_state, max_packet, flags); + next = xhci_segment_alloc(xhci, cycle_state, max_packet, num, + flags); if (!next) { prev = *first; while (prev) { @@ -353,7 +357,7 @@ static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci, xhci_link_segments(prev, next, type, chain_links); prev = next; - num_segs--; + num++; } xhci_link_segments(prev, *first, type, chain_links); *last = prev; @@ -1801,7 +1805,6 @@ xhci_free_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir) { struct device *dev = xhci_to_hcd(xhci)->self.sysdev; size_t erst_size; - u64 tmp64; u32 tmp; if (!ir) @@ -1824,9 +1827,7 @@ xhci_free_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir) tmp &= ERST_SIZE_MASK; writel(tmp, &ir->ir_set->erst_size); - tmp64 = xhci_read_64(xhci, &ir->ir_set->erst_dequeue); - tmp64 &= (u64) ERST_PTR_MASK; - xhci_write_64(xhci, tmp64, &ir->ir_set->erst_dequeue); + xhci_write_64(xhci, ERST_EHB, &ir->ir_set->erst_dequeue); } /* free interrrupter event ring */ @@ -1933,7 +1934,6 @@ void xhci_mem_cleanup(struct xhci_hcd *xhci) static void xhci_set_hc_event_deq(struct xhci_hcd *xhci, struct xhci_interrupter *ir) { - u64 temp; dma_addr_t deq; deq = xhci_trb_virt_to_dma(ir->event_ring->deq_seg, @@ -1941,15 +1941,12 @@ static void xhci_set_hc_event_deq(struct xhci_hcd *xhci, struct xhci_interrupter if (!deq) xhci_warn(xhci, "WARN something wrong with SW event ring dequeue ptr.\n"); /* Update HC event ring dequeue pointer */ - temp = xhci_read_64(xhci, &ir->ir_set->erst_dequeue); - temp &= ERST_PTR_MASK; /* Don't clear the EHB bit (which is RW1C) because * there might be more events to service. 
*/ - temp &= ~ERST_EHB; xhci_dbg_trace(xhci, trace_xhci_dbg_init, "// Write event ring dequeue pointer, preserving EHB bit"); - xhci_write_64(xhci, ((u64) deq & (u64) ~ERST_PTR_MASK) | temp, + xhci_write_64(xhci, ((u64) deq & (u64) ~ERST_PTR_MASK), &ir->ir_set->erst_dequeue); } diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index 21f50d25157f..ca8b84816e41 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -3018,7 +3018,7 @@ static void xhci_update_erst_dequeue(struct xhci_hcd *xhci, return; /* Update HC event ring dequeue pointer */ - temp_64 &= ERST_DESI_MASK; + temp_64 = ir->event_ring->deq_seg->num & ERST_DESI_MASK; temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK); } diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index 5df370482521..dc567be207ce 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1545,6 +1545,7 @@ struct xhci_segment { union xhci_trb *trbs; /* private to HCD */ struct xhci_segment *next; + unsigned int num; dma_addr_t dma; /* Max packet sized bounce buffer for td-fragmant alignment */ dma_addr_t bounce_dma; From patchwork Thu Aug 24 16:15:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 13364480 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 56F0CC6FA8F for ; Thu, 24 Aug 2023 16:45:00 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242676AbjHXQoa (ORCPT ); Thu, 24 Aug 2023 12:44:30 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:58252 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242746AbjHXQoQ (ORCPT ); Thu, 24 Aug 2023 12:44:16 -0400 X-Greylist: delayed 451 seconds by postgrey-1.37 at lindbergh.monkeyblade.net; Thu, 24 Aug 2023 09:44:09 PDT Received: from mailout2.hostsharing.net (mailout2.hostsharing.net [83.223.78.233]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3398719B5 for ; Thu, 24 Aug 2023 09:44:09 -0700 (PDT) Received: from h08.hostsharing.net (h08.hostsharing.net [83.223.95.28]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256 client-signature RSA-PSS (4096 bits) client-digest SHA256) (Client CN "*.hostsharing.net", Issuer "RapidSSL Global TLS RSA4096 SHA256 2022 CA1" (verified OK)) by mailout2.hostsharing.net (Postfix) with ESMTPS id 82F7710189CE5; Thu, 24 Aug 2023 18:36:36 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id 5E009603DADA; Thu, 24 Aug 2023 18:36:36 +0200 (CEST) X-Mailbox-Line: From f775835fc46bb5357c5fe857f996c7869b29adf6 Mon Sep 17 00:00:00 2001 Message-Id: In-Reply-To: References: From: Lukas Wunner Date: Thu, 24 Aug 2023 18:15:04 +0200 Subject: [PATCH v2 04/10] xhci: Use more than one Event Ring segment To: Mathias Nyman , Greg Kroah-Hartman Cc: linux-usb@vger.kernel.org, Jonathan Bell , Phil Elwell , Nicolas Saenz Julienne , Stefan Wahren , Philipp Rosenberger , Lino Sanfilippo Precedence: bulk List-ID: X-Mailing-List: 
linux-usb@vger.kernel.org From: Jonathan Bell Users have reported log spam created by "Event Ring Full" xHC event TRBs. These are caused by interrupt latency in conjunction with a very busy set of devices on the bus. The errors are benign, but throughput will suffer as the xHC will pause processing of transfers until the Event Ring is drained by the kernel. Commit dc0ffbea5729 ("usb: host: xhci: update event ring dequeue pointer on purpose") mitigated the issue by advancing the Event Ring Dequeue Pointer already after half a segment has been processed. Nevertheless, providing a larger Event Ring would be useful to cope with load peaks. Expand the number of event TRB slots available by increasing the number of Event Ring segments in the ERST. Controllers have a hardware-defined limit as to the number of ERST entries they can process, but with up to 32k it can be excessively high (sec 5.3.4). So cap the actual number at 2 (configurable through the ERST_MAX_SEGS macro), which seems like a reasonable quantity. It is supported by any xHC because the limit in the HCSPARAMS2 register is defined as a power of 2. Renesas uPD720201 and VIA VL805 controllers do not support more than 2 ERST entries. An alternative to increasing the number of Event Ring segments would be an increase of the segment size. But that requires allocating multiple contiguous pages, which may be impossible if memory is fragmented. Signed-off-by: Jonathan Bell Signed-off-by: Lukas Wunner --- Change v1 -> v2: Only use up to 2 Event Ring segments by default (instead of 8). drivers/usb/host/xhci-mem.c | 10 +++++++--- drivers/usb/host/xhci.h | 4 ++-- 2 files changed, 9 insertions(+), 5 deletions(-) diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index 17c9942010c8..a8a8fc2cc4a5 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -2235,14 +2235,18 @@ xhci_alloc_interrupter(struct xhci_hcd *xhci, gfp_t flags) { struct device *dev = xhci_to_hcd(xhci)->self.sysdev; struct xhci_interrupter *ir; + unsigned int num_segs; int ret; ir = kzalloc_node(sizeof(*ir), flags, dev_to_node(dev)); if (!ir) return NULL; - ir->event_ring = xhci_ring_alloc(xhci, ERST_NUM_SEGS, 1, TYPE_EVENT, - 0, flags); + num_segs = min_t(unsigned int, 1 << HCS_ERST_MAX(xhci->hcs_params2), + ERST_MAX_SEGS); + + ir->event_ring = xhci_ring_alloc(xhci, num_segs, 1, TYPE_EVENT, 0, + flags); if (!ir->event_ring) { xhci_warn(xhci, "Failed to allocate interrupter event ring\n"); kfree(ir); @@ -2278,7 +2282,7 @@ xhci_add_interrupter(struct xhci_hcd *xhci, struct xhci_interrupter *ir, /* set ERST count with the number of entries in the segment table */ erst_size = readl(&ir->ir_set->erst_size); erst_size &= ERST_SIZE_MASK; - erst_size |= ERST_NUM_SEGS; + erst_size |= ir->event_ring->num_segs; writel(erst_size, &ir->ir_set->erst_size); erst_base = xhci_read_64(xhci, &ir->ir_set->erst_base); diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index dc567be207ce..a2ca5e4e0b9f 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -1674,8 +1674,8 @@ struct urb_priv { * Each segment table entry is 4*32bits long. 1K seems like an ok size: * (1K bytes * 8bytes/bit) / (4*32 bits) = 64 segment entries in the table, * meaning 64 ring segments. 
- * Initial allocated size of the ERST, in number of entries */
-#define	ERST_NUM_SEGS	1
+ * Reasonable limit for number of Event Ring segments (spec allows 32k) */
+#define	ERST_MAX_SEGS	2
 /* Poll every 60 seconds */
 #define	POLL_TIMEOUT	60
 /* Stop endpoint command timeout (secs) for URB cancellation watchdog timer */

From patchwork Thu Aug 24 16:15:05 2023
From: Lukas Wunner
Date: Thu, 24 Aug 2023 18:15:05 +0200
Subject: [PATCH v2 05/10] xhci: Adjust segment numbers after ring expansion
To: Mathias Nyman, Greg Kroah-Hartman
Cc: linux-usb@vger.kernel.org, Jonathan Bell, Phil Elwell,
    Nicolas Saenz Julienne, Stefan Wahren, Philipp Rosenberger,
    Lino Sanfilippo

Initial xhci_ring allocation has just been amended to assign a
monotonically increasing number to each ring segment.  However rings
may be expanded after initial allocation.

So number newly inserted segments starting from the preceding segment
in the ring and renumber all segments succeeding the newly inserted
ones.

This is not a fix because ring expansion currently isn't done on the
Event Ring and that's the only ring type using the segment number.
It's just in preparation for when either Event Ring expansion is added
or when other ring types start making use of the segment number.
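[Editorial illustration, not part of the submitted patch: a stand-alone
sketch of the renumbering scheme, mirroring the loop this patch adds to
xhci_link_rings().  The struct and helper names are simplified
stand-ins, not the driver's.]

/* Stand-alone sketch (not driver code): splice new, pre-numbered
 * segments into a circular singly linked ring behind "prev", then
 * renumber every older segment that now follows the insertion.
 */
#include <stdio.h>
#include <stdlib.h>

struct seg {
	unsigned int num;
	struct seg *next;
};

static void link_segs(struct seg *prev, struct seg *first, struct seg *last,
		      struct seg *ring_last)
{
	struct seg *seg;

	last->next = prev->next;
	prev->next = first;

	/* same renumbering loop as added to xhci_link_rings() */
	for (seg = last; seg != ring_last; seg = seg->next)
		seg->next->num = seg->num + 1;
}

static struct seg *alloc_seg(unsigned int num)
{
	struct seg *s = calloc(1, sizeof(*s));

	s->num = num;
	return s;
}

int main(void)
{
	/* ring of three segments: 0 -> 1 -> 2 -> back to 0 */
	struct seg *s0 = alloc_seg(0), *s1 = alloc_seg(1), *s2 = alloc_seg(2);
	struct seg *n0, *n1, *seg;

	s0->next = s1; s1->next = s2; s2->next = s0;

	/* expand by two segments behind segment 0 (the enqueue segment here) */
	n0 = alloc_seg(s0->num + 1);
	n1 = alloc_seg(s0->num + 2);
	n0->next = n1;
	link_segs(s0, n0, n1, s2);

	seg = s0;
	do {
		printf("segment %u\n", seg->num);
		seg = seg->next;
	} while (seg != s0);
	return 0;
}

[Running it prints segments 0 through 4 in ring order: the new segments
take the numbers after their predecessor and the old segments behind
the insertion point are renumbered.]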
Signed-off-by: Lukas Wunner --- drivers/usb/host/xhci-mem.c | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index a8a8fc2cc4a5..1c0f5263cf81 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -130,7 +130,7 @@ static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *ring, struct xhci_segment *first, struct xhci_segment *last, unsigned int num_segs) { - struct xhci_segment *next; + struct xhci_segment *next, *seg; bool chain_links; if (!ring || !first || !last) @@ -153,6 +153,9 @@ static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *ring, |= cpu_to_le32(LINK_TOGGLE); ring->last_seg = last; } + + for (seg = last; seg != ring->last_seg; seg = seg->next) + seg->next->num = seg->num + 1; } /* @@ -322,11 +325,11 @@ void xhci_initialize_ring_info(struct xhci_ring *ring, /* Allocate segments and link them for a ring */ static int xhci_alloc_segments_for_ring(struct xhci_hcd *xhci, struct xhci_segment **first, struct xhci_segment **last, - unsigned int num_segs, unsigned int cycle_state, - enum xhci_ring_type type, unsigned int max_packet, gfp_t flags) + unsigned int num_segs, unsigned int num, + unsigned int cycle_state, enum xhci_ring_type type, + unsigned int max_packet, gfp_t flags) { struct xhci_segment *prev; - unsigned int num = 0; bool chain_links; /* Set chain bit for 0.95 hosts, and for isoc rings on AMD 0.96 host */ @@ -392,7 +395,7 @@ struct xhci_ring *xhci_ring_alloc(struct xhci_hcd *xhci, return ring; ret = xhci_alloc_segments_for_ring(xhci, &ring->first_seg, - &ring->last_seg, num_segs, cycle_state, type, + &ring->last_seg, num_segs, 0, cycle_state, type, max_packet, flags); if (ret) goto fail; @@ -432,7 +435,8 @@ int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring, int ret; ret = xhci_alloc_segments_for_ring(xhci, &first, &last, - num_new_segs, ring->cycle_state, ring->type, + num_new_segs, ring->enq_seg->num + 1, + ring->cycle_state, ring->type, ring->bounce_buf_len, flags); if (ret) return -ENOMEM; From patchwork Thu Aug 24 16:15:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 13364479 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 730DCC71153 for ; Thu, 24 Aug 2023 16:44:28 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242581AbjHXQn6 (ORCPT ); Thu, 24 Aug 2023 12:43:58 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:47928 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S236401AbjHXQnx (ORCPT ); Thu, 24 Aug 2023 12:43:53 -0400 Received: from mailout3.hostsharing.net (mailout3.hostsharing.net [176.9.242.54]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 618D8198 for ; Thu, 24 Aug 2023 09:43:51 -0700 (PDT) Received: from h08.hostsharing.net (h08.hostsharing.net [83.223.95.28]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256 client-signature RSA-PSS (4096 bits) client-digest SHA256) (Client CN "*.hostsharing.net", Issuer "RapidSSL Global TLS RSA4096 SHA256 2022 CA1" (verified OK)) by mailout3.hostsharing.net (Postfix) with ESMTPS id 
D3251101E6B30; Thu, 24 Aug 2023 18:43:47 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id 8F7EB603DADA; Thu, 24 Aug 2023 18:43:47 +0200 (CEST) X-Mailbox-Line: From f2849ad55385886c8c0b98bd5de04daf970eefcb Mon Sep 17 00:00:00 2001 Message-Id: In-Reply-To: References: From: Lukas Wunner Date: Thu, 24 Aug 2023 18:15:06 +0200 Subject: [PATCH v2 06/10] xhci: Update last segment pointer after Event Ring expansion To: Mathias Nyman , Greg Kroah-Hartman Cc: linux-usb@vger.kernel.org, Jonathan Bell , Phil Elwell , Nicolas Saenz Julienne , Stefan Wahren , Philipp Rosenberger , Lino Sanfilippo Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org When expanding a ring at its "end", ring->last_seg needs to be updated for Event Rings as well, not just for all the other ring types. This is not a fix because ring expansion currently isn't done on the Event Ring. It's just in preparation for when it's added. Signed-off-by: Lukas Wunner --- drivers/usb/host/xhci-mem.c | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index 1c0f5263cf81..d4123e6f2549 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -146,11 +146,13 @@ static void xhci_link_rings(struct xhci_hcd *xhci, struct xhci_ring *ring, xhci_link_segments(last, next, ring->type, chain_links); ring->num_segs += num_segs; - if (ring->type != TYPE_EVENT && ring->enq_seg == ring->last_seg) { - ring->last_seg->trbs[TRBS_PER_SEGMENT-1].link.control - &= ~cpu_to_le32(LINK_TOGGLE); - last->trbs[TRBS_PER_SEGMENT-1].link.control - |= cpu_to_le32(LINK_TOGGLE); + if (ring->enq_seg == ring->last_seg) { + if (ring->type != TYPE_EVENT) { + ring->last_seg->trbs[TRBS_PER_SEGMENT-1].link.control + &= ~cpu_to_le32(LINK_TOGGLE); + last->trbs[TRBS_PER_SEGMENT-1].link.control + |= cpu_to_le32(LINK_TOGGLE); + } ring->last_seg = last; } From patchwork Thu Aug 24 16:15:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 13364481 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 2E3CBC6FA8F for ; Thu, 24 Aug 2023 16:46:36 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S240733AbjHXQqG (ORCPT ); Thu, 24 Aug 2023 12:46:06 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:43028 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242748AbjHXQqB (ORCPT ); Thu, 24 Aug 2023 12:46:01 -0400 Received: from mailout1.hostsharing.net (mailout1.hostsharing.net [IPv6:2a01:37:1000::53df:5fcc:0]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id C90C310FE for ; Thu, 24 Aug 2023 09:45:59 -0700 (PDT) Received: from h08.hostsharing.net (h08.hostsharing.net [IPv6:2a01:37:1000::53df:5f1c:0]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256 client-signature RSA-PSS (4096 bits) client-digest SHA256) (Client CN "*.hostsharing.net", Issuer "RapidSSL Global TLS RSA4096 
SHA256 2022 CA1" (verified OK)) by mailout1.hostsharing.net (Postfix) with ESMTPS id 246FE101920CB; Thu, 24 Aug 2023 18:45:58 +0200 (CEST) Received: from localhost (unknown [89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id 00A5A603DADA; Thu, 24 Aug 2023 18:45:57 +0200 (CEST) X-Mailbox-Line: From 6049bb86ed80691312dbaac50d219721717ef29e Mon Sep 17 00:00:00 2001 Message-Id: <6049bb86ed80691312dbaac50d219721717ef29e.1692892942.git.lukas@wunner.de> In-Reply-To: References: From: Lukas Wunner Date: Thu, 24 Aug 2023 18:15:07 +0200 Subject: [PATCH v2 07/10] xhci: Expose segment numbers in debugfs To: Mathias Nyman , Greg Kroah-Hartman Cc: linux-usb@vger.kernel.org, Jonathan Bell , Phil Elwell , Nicolas Saenz Julienne , Stefan Wahren , Philipp Rosenberger , Lino Sanfilippo Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org Ring segments have just been amended with a monotonically increasing number. To allow developers to inspect the segment numbers and ensure correctness in particular after ring expansion, expose them in each ring's "trbs" file in debugfs. Signed-off-by: Lukas Wunner --- drivers/usb/host/xhci-debugfs.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c index 99baa60ef50f..6d142cd61bd6 100644 --- a/drivers/usb/host/xhci-debugfs.c +++ b/drivers/usb/host/xhci-debugfs.c @@ -204,7 +204,7 @@ static void xhci_ring_dump_segment(struct seq_file *s, for (i = 0; i < TRBS_PER_SEGMENT; i++) { trb = &seg->trbs[i]; dma = seg->dma + i * sizeof(*trb); - seq_printf(s, "%pad: %s\n", &dma, + seq_printf(s, "%2u %pad: %s\n", seg->num, &dma, xhci_decode_trb(str, XHCI_MSG_MAX, le32_to_cpu(trb->generic.field[0]), le32_to_cpu(trb->generic.field[1]), le32_to_cpu(trb->generic.field[2]), From patchwork Thu Aug 24 16:15:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Lukas Wunner X-Patchwork-Id: 13364490 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by smtp.lore.kernel.org (Postfix) with ESMTP id 66136C6FA8F for ; Thu, 24 Aug 2023 16:48:45 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S242650AbjHXQsO (ORCPT ); Thu, 24 Aug 2023 12:48:14 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:44166 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S242777AbjHXQrm (ORCPT ); Thu, 24 Aug 2023 12:47:42 -0400 Received: from mailout3.hostsharing.net (mailout3.hostsharing.net [176.9.242.54]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 46E89E51 for ; Thu, 24 Aug 2023 09:47:40 -0700 (PDT) Received: from h08.hostsharing.net (h08.hostsharing.net [IPv6:2a01:37:1000::53df:5f1c:0]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256 client-signature RSA-PSS (4096 bits) client-digest SHA256) (Client CN "*.hostsharing.net", Issuer "RapidSSL Global TLS RSA4096 SHA256 2022 CA1" (verified OK)) by mailout3.hostsharing.net (Postfix) with ESMTPS id 4664E101E6B30; Thu, 24 Aug 2023 18:47:38 +0200 (CEST) Received: from localhost (unknown 
[89.246.108.87]) (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits) key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest SHA256) (No client certificate requested) by h08.hostsharing.net (Postfix) with ESMTPSA id F294B603DADA; Thu, 24 Aug 2023 18:47:37 +0200 (CEST) X-Mailbox-Line: From e81751a0fae84fc042ab422ab47da11ad4e33cdd Mon Sep 17 00:00:00 2001 Message-Id: In-Reply-To: References: From: Lukas Wunner Date: Thu, 24 Aug 2023 18:15:08 +0200 Subject: [PATCH v2 08/10] xhci: Clean up ERST_PTR_MASK inversion To: Mathias Nyman , Greg Kroah-Hartman Cc: linux-usb@vger.kernel.org, Jonathan Bell , Phil Elwell , Nicolas Saenz Julienne , Stefan Wahren , Philipp Rosenberger , Lino Sanfilippo Precedence: bulk List-ID: X-Mailing-List: linux-usb@vger.kernel.org Mathias notes that the ERST_PTR_MASK macro is named as if it's masking the Event Ring Dequeue Pointer in the ERDP register, but in actuality it's masking the inverse. Invert the macro's value for clarity. Migrate it to the modern GENMASK_ULL() syntax to avoid u64 casts. Signed-off-by: Lukas Wunner --- drivers/usb/host/xhci-mem.c | 3 +-- drivers/usb/host/xhci-ring.c | 5 ++--- drivers/usb/host/xhci.c | 2 +- drivers/usb/host/xhci.h | 2 +- 4 files changed, 5 insertions(+), 7 deletions(-) diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index d4123e6f2549..b133817ad180 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -1952,8 +1952,7 @@ static void xhci_set_hc_event_deq(struct xhci_hcd *xhci, struct xhci_interrupter */ xhci_dbg_trace(xhci, trace_xhci_dbg_init, "// Write event ring dequeue pointer, preserving EHB bit"); - xhci_write_64(xhci, ((u64) deq & (u64) ~ERST_PTR_MASK), - &ir->ir_set->erst_dequeue); + xhci_write_64(xhci, deq & ERST_PTR_MASK, &ir->ir_set->erst_dequeue); } static void xhci_add_in_port(struct xhci_hcd *xhci, unsigned int num_ports, diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c index ca8b84816e41..c6a89c483663 100644 --- a/drivers/usb/host/xhci-ring.c +++ b/drivers/usb/host/xhci-ring.c @@ -3013,13 +3013,12 @@ static void xhci_update_erst_dequeue(struct xhci_hcd *xhci, * Per 4.9.4, Software writes to the ERDP register shall * always advance the Event Ring Dequeue Pointer value. */ - if ((temp_64 & (u64) ~ERST_PTR_MASK) == - ((u64) deq & (u64) ~ERST_PTR_MASK)) + if ((temp_64 & ERST_PTR_MASK) == (deq & ERST_PTR_MASK)) return; /* Update HC event ring dequeue pointer */ temp_64 = ir->event_ring->deq_seg->num & ERST_DESI_MASK; - temp_64 |= ((u64) deq & (u64) ~ERST_PTR_MASK); + temp_64 |= deq & ERST_PTR_MASK; } /* Clear the event handler busy flag (RW1C) */ diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c index e1b1b64a0723..68920cb96044 100644 --- a/drivers/usb/host/xhci.c +++ b/drivers/usb/host/xhci.c @@ -520,7 +520,7 @@ int xhci_run(struct usb_hcd *hcd) xhci_dbg_trace(xhci, trace_xhci_dbg_init, "xhci_run"); temp_64 = xhci_read_64(xhci, &ir->ir_set->erst_dequeue); - temp_64 &= ~ERST_PTR_MASK; + temp_64 &= ERST_PTR_MASK; xhci_dbg_trace(xhci, trace_xhci_dbg_init, "ERST deq = 64'h%0lx", (long unsigned int) temp_64); diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index a2ca5e4e0b9f..20744e41e385 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -525,7 +525,7 @@ struct xhci_intr_reg { * a work queue (or delayed service routine)? 
 */
 #define	ERST_EHB		(1 << 3)
-#define	ERST_PTR_MASK		(0xf)
+#define	ERST_PTR_MASK		(GENMASK_ULL(63, 4))
 
 /**
  * struct xhci_run_regs

From patchwork Thu Aug 24 16:15:09 2023
From: Lukas Wunner
Date: Thu, 24 Aug 2023 18:15:09 +0200
Subject: [PATCH v2 09/10] xhci: Clean up stale comment on ERST_SIZE macro
To: Mathias Nyman, Greg Kroah-Hartman
Cc: linux-usb@vger.kernel.org, Jonathan Bell, Phil Elwell,
    Nicolas Saenz Julienne, Stefan Wahren, Philipp Rosenberger,
    Lino Sanfilippo

Commit ebd88cf50729 ("xhci: Remove unused defines for ERST_SIZE and
ERST_ENTRIES") removed the ERST_SIZE macro but retained a code comment
explaining the quantity chosen in the macro.

Remove the code comment as well.

Signed-off-by: Lukas Wunner
---
 drivers/usb/host/xhci.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 20744e41e385..effc80eb8fa9 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1670,11 +1670,7 @@ struct urb_priv {
 	struct xhci_td	td[];
 };
 
-/*
- * Each segment table entry is 4*32bits long. 1K seems like an ok size:
- * (1K bytes * 8bytes/bit) / (4*32 bits) = 64 segment entries in the table,
- * meaning 64 ring segments.
- * Reasonable limit for number of Event Ring segments (spec allows 32k) */
+/* Reasonable limit for number of Event Ring segments (spec allows 32k) */
 #define	ERST_MAX_SEGS	2
 /* Poll every 60 seconds */
 #define	POLL_TIMEOUT	60

From patchwork Thu Aug 24 16:15:10 2023
From: Lukas Wunner
Date: Thu, 24 Aug 2023 18:15:10 +0200
Subject: [PATCH v2 10/10] xhci: Clean up xhci_{alloc,free}_erst() declarations
To: Mathias Nyman, Greg Kroah-Hartman
Cc: linux-usb@vger.kernel.org, Jonathan Bell, Phil Elwell,
    Nicolas Saenz Julienne, Stefan Wahren, Philipp Rosenberger,
    Lino Sanfilippo

xhci_alloc_erst() has global scope even though it's only used in
xhci-mem.c.  Declare it static.

xhci_free_erst() was removed by commit b17a57f89f69 ("xhci: Refactor
interrupter code for initial multi interrupter support."), but a
declaration in xhci.h still remains.  Drop it.
Signed-off-by: Lukas Wunner --- drivers/usb/host/xhci-mem.c | 2 +- drivers/usb/host/xhci.h | 5 ----- 2 files changed, 1 insertion(+), 6 deletions(-) diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c index b133817ad180..4d0b1c0e61a8 100644 --- a/drivers/usb/host/xhci-mem.c +++ b/drivers/usb/host/xhci-mem.c @@ -1776,7 +1776,7 @@ void xhci_free_command(struct xhci_hcd *xhci, kfree(command); } -int xhci_alloc_erst(struct xhci_hcd *xhci, +static int xhci_alloc_erst(struct xhci_hcd *xhci, struct xhci_ring *evt_ring, struct xhci_erst *erst, gfp_t flags) diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h index effc80eb8fa9..7749499ed32a 100644 --- a/drivers/usb/host/xhci.h +++ b/drivers/usb/host/xhci.h @@ -2075,13 +2075,8 @@ struct xhci_ring *xhci_ring_alloc(struct xhci_hcd *xhci, void xhci_ring_free(struct xhci_hcd *xhci, struct xhci_ring *ring); int xhci_ring_expansion(struct xhci_hcd *xhci, struct xhci_ring *ring, unsigned int num_trbs, gfp_t flags); -int xhci_alloc_erst(struct xhci_hcd *xhci, - struct xhci_ring *evt_ring, - struct xhci_erst *erst, - gfp_t flags); void xhci_initialize_ring_info(struct xhci_ring *ring, unsigned int cycle_state); -void xhci_free_erst(struct xhci_hcd *xhci, struct xhci_erst *erst); void xhci_free_endpoint_ring(struct xhci_hcd *xhci, struct xhci_virt_device *virt_dev, unsigned int ep_index);