From patchwork Mon Dec 14 08:39:53 2020
X-Patchwork-Submitter: Ikjoon Jang
X-Patchwork-Id: 11971559
From: Ikjoon Jang
To: linux-mediatek@lists.infradead.org, linux-usb@vger.kernel.org
Cc: Zhanyong Wang, Mathias Nyman, Greg Kroah-Hartman, linux-kernel@vger.kernel.org,
    Tianping Fang, Matthias Brugger, Chunfeng Yun, Ikjoon Jang,
    linux-arm-kernel@lists.infradead.org
Subject: [PATCH v4] usb: xhci-mtk: fix unreleased bandwidth data
Date: Mon, 14 Dec 2020 16:39:53 +0800
Message-Id: <20201214163945.v4.1.Id0d31b5f3ddf5e734d2ab11161ac5821921b1e1e@changeid>
In-Reply-To: <20201214131640.v3.1.Id0d31b5f3ddf5e734d2ab11161ac5821921b1e1e@changeid>
References: <20201214131640.v3.1.Id0d31b5f3ddf5e734d2ab11161ac5821921b1e1e@changeid>

xhci-mtk has hooks on add_endpoint() and drop_endpoint() from xhci to
handle its own software bandwidth management, and it stores bandwidth
data in an internal table every time add_endpoint() is called. When
bandwidth allocation fails for one endpoint, all earlier allocations
from the same interface can still remain in that table.

This patch adds two more hooks, on check_bandwidth() and
reset_bandwidth(), and makes xhci-mtk release all the failed endpoints
in reset_bandwidth().

Fixes: 4b0f7a77fb3c ("usb: xhci-mtk: supports bandwidth scheduling with multi-TT")
Signed-off-by: Ikjoon Jang
---
Changes in v4:
- bugfix in v3: check_bandwidth() returned an uninitialized value when
  no endpoints were newly added

Changes in v3:
- drop unrelated code cleanups
- change Fixes tag to keep dependency

Changes in v2:
- fix a 0-day warning from an unused variable
- split one big patch into three patches
- fix wrong offset in mediatek hw flags
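For context, the flow this series moves to can be summarized outside the
driver: add_endpoint() only stages a request on a pending list,
check_bandwidth() validates everything staged and commits it in one step,
and reset_bandwidth() throws the staged requests away, so a failed
allocation cannot leave earlier ones behind. The sketch below illustrates
that pattern in plain C; the names, the list handling, and the fixed "bus
capacity" are invented for illustration and are not the driver's code.

/* Illustration only: staged-commit bandwidth bookkeeping. */
#include <stdio.h>
#include <stdlib.h>

struct ep_req {
	int bw;			/* bandwidth this endpoint would consume */
	struct ep_req *next;
};

static struct ep_req *pending;		/* staged, not yet charged */
static struct ep_req *committed;	/* endpoints whose bandwidth is reserved */
static int committed_bw;
static const int bus_capacity = 100;

/* Stage an endpoint; nothing is charged against the bus yet. */
static int add_endpoint(int bw)
{
	struct ep_req *req = malloc(sizeof(*req));

	if (!req)
		return -1;
	req->bw = bw;
	req->next = pending;
	pending = req;
	return 0;
}

/* Drop everything that was staged but never committed. */
static void reset_bandwidth(void)
{
	while (pending) {
		struct ep_req *req = pending;

		pending = req->next;
		free(req);
	}
}

/* Validate all staged endpoints first, then move them to the committed list. */
static int check_bandwidth(void)
{
	struct ep_req *req;
	int total = committed_bw;

	for (req = pending; req; req = req->next)
		total += req->bw;
	if (total > bus_capacity)
		return -1;	/* caller is expected to call reset_bandwidth() */

	while (pending) {
		req = pending;
		pending = req->next;
		req->next = committed;
		committed = req;
		committed_bw += req->bw;
	}
	return 0;
}

int main(void)
{
	add_endpoint(60);
	add_endpoint(30);
	if (check_bandwidth())
		reset_bandwidth();
	printf("committed: %d\n", committed_bw);	/* 90 */

	add_endpoint(50);		/* would exceed capacity */
	if (check_bandwidth())
		reset_bandwidth();	/* staged request is released, not leaked */
	printf("committed: %d\n", committed_bw);	/* still 90 */
	return 0;
}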
 drivers/usb/host/xhci-mtk-sch.c | 124 ++++++++++++++++++++++----------
 drivers/usb/host/xhci-mtk.h     |  13 ++++
 drivers/usb/host/xhci.c         |   9 +++
 3 files changed, 109 insertions(+), 37 deletions(-)

diff --git a/drivers/usb/host/xhci-mtk-sch.c b/drivers/usb/host/xhci-mtk-sch.c
index 45c54d56ecbd..95d20de9fd1f 100644
--- a/drivers/usb/host/xhci-mtk-sch.c
+++ b/drivers/usb/host/xhci-mtk-sch.c
@@ -200,6 +200,7 @@ static struct mu3h_sch_ep_info *create_sch_ep(struct usb_device *udev,
 
 	sch_ep->sch_tt = tt;
 	sch_ep->ep = ep;
+	INIT_LIST_HEAD(&sch_ep->tt_endpoint);
 
 	return sch_ep;
 }
@@ -583,6 +584,8 @@ int xhci_mtk_sch_init(struct xhci_hcd_mtk *mtk)
 
 	mtk->sch_array = sch_array;
 
+	INIT_LIST_HEAD(&mtk->bw_ep_list_new);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xhci_mtk_sch_init);
@@ -601,19 +604,14 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 	struct xhci_ep_ctx *ep_ctx;
 	struct xhci_slot_ctx *slot_ctx;
 	struct xhci_virt_device *virt_dev;
-	struct mu3h_sch_bw_info *sch_bw;
 	struct mu3h_sch_ep_info *sch_ep;
-	struct mu3h_sch_bw_info *sch_array;
 	unsigned int ep_index;
-	int bw_index;
-	int ret = 0;
 
 	xhci = hcd_to_xhci(hcd);
 	virt_dev = xhci->devs[udev->slot_id];
 	ep_index = xhci_get_endpoint_index(&ep->desc);
 	slot_ctx = xhci_get_slot_ctx(xhci, virt_dev->in_ctx);
 	ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
-	sch_array = mtk->sch_array;
 
 	xhci_dbg(xhci, "%s() type:%d, speed:%d, mpkt:%d, dir:%d, ep:%p\n",
 		__func__, usb_endpoint_type(&ep->desc), udev->speed,
@@ -632,39 +630,34 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 		return 0;
 	}
 
-	bw_index = get_bw_index(xhci, udev, ep);
-	sch_bw = &sch_array[bw_index];
-
 	sch_ep = create_sch_ep(udev, ep, ep_ctx);
 	if (IS_ERR_OR_NULL(sch_ep))
 		return -ENOMEM;
 
 	setup_sch_info(udev, ep_ctx, sch_ep);
 
-	ret = check_sch_bw(udev, sch_bw, sch_ep);
-	if (ret) {
-		xhci_err(xhci, "Not enough bandwidth!\n");
-		if (is_fs_or_ls(udev->speed))
-			drop_tt(udev);
-
-		kfree(sch_ep);
-		return -ENOSPC;
-	}
+	list_add_tail(&sch_ep->endpoint, &mtk->bw_ep_list_new);
 
-	list_add_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xhci_mtk_add_ep_quirk);
 
-	ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
-		| EP_BCSCOUNT(sch_ep->cs_count) | EP_BBM(sch_ep->burst_mode));
-	ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
-		| EP_BREPEAT(sch_ep->repeat));
+static void xhci_mtk_drop_ep(struct xhci_hcd_mtk *mtk, struct usb_device *udev,
+			     struct mu3h_sch_ep_info *sch_ep)
+{
+	struct xhci_hcd *xhci = hcd_to_xhci(mtk->hcd);
+	int bw_index = get_bw_index(xhci, udev, sch_ep->ep);
+	struct mu3h_sch_bw_info *sch_bw = &mtk->sch_array[bw_index];
 
-	xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
-		sch_ep->pkts, sch_ep->cs_count, sch_ep->burst_mode,
-		sch_ep->offset, sch_ep->repeat);
+	update_bus_bw(sch_bw, sch_ep, 0);
+	list_del(&sch_ep->endpoint);
 
-	return 0;
+	if (sch_ep->sch_tt) {
+		list_del(&sch_ep->tt_endpoint);
+		drop_tt(udev);
+	}
+	kfree(sch_ep);
 }
-EXPORT_SYMBOL_GPL(xhci_mtk_add_ep_quirk);
 
 void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 		struct usb_host_endpoint *ep)
@@ -675,7 +668,7 @@ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 	struct xhci_virt_device *virt_dev;
 	struct mu3h_sch_bw_info *sch_array;
 	struct mu3h_sch_bw_info *sch_bw;
-	struct mu3h_sch_ep_info *sch_ep;
+	struct mu3h_sch_ep_info *sch_ep, *tmp;
 	int bw_index;
 
 	xhci = hcd_to_xhci(hcd);
@@ -694,17 +687,74 @@ void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 	bw_index = get_bw_index(xhci, udev, ep);
 	sch_bw = &sch_array[bw_index];
 
-	list_for_each_entry(sch_ep, &sch_bw->bw_ep_list, endpoint) {
+	list_for_each_entry_safe(sch_ep, tmp, &sch_bw->bw_ep_list, endpoint) {
 		if (sch_ep->ep == ep) {
-			update_bus_bw(sch_bw, sch_ep, 0);
-			list_del(&sch_ep->endpoint);
-			if (is_fs_or_ls(udev->speed)) {
-				list_del(&sch_ep->tt_endpoint);
-				drop_tt(udev);
-			}
-			kfree(sch_ep);
-			break;
+			xhci_mtk_drop_ep(mtk, udev, sch_ep);
 		}
 	}
 }
 EXPORT_SYMBOL_GPL(xhci_mtk_drop_ep_quirk);
+
+int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
+	struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+	struct xhci_virt_device *virt_dev = xhci->devs[udev->slot_id];
+	struct mu3h_sch_bw_info *sch_bw;
+	struct mu3h_sch_ep_info *sch_ep, *tmp;
+	int bw_index, ret;
+
+	dev_dbg(&udev->dev, "%s\n", __func__);
+
+	if (list_empty(&mtk->bw_ep_list_new))
+		return 0;
+
+	list_for_each_entry(sch_ep, &mtk->bw_ep_list_new, endpoint) {
+		bw_index = get_bw_index(xhci, udev, sch_ep->ep);
+		sch_bw = &mtk->sch_array[bw_index];
+
+		ret = check_sch_bw(udev, sch_bw, sch_ep);
+		if (ret) {
+			xhci_err(xhci, "Not enough bandwidth!\n");
+			return -ENOSPC;
+		}
+	}
+
+	list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_list_new, endpoint) {
+		struct xhci_ep_ctx *ep_ctx;
+		struct usb_host_endpoint *ep = sch_ep->ep;
+		unsigned int ep_index = xhci_get_endpoint_index(&ep->desc);
+
+		bw_index = get_bw_index(xhci, udev, ep);
+		sch_bw = &mtk->sch_array[bw_index];
+
+		list_move_tail(&sch_ep->endpoint, &sch_bw->bw_ep_list);
+
+		ep_ctx = xhci_get_ep_ctx(xhci, virt_dev->in_ctx, ep_index);
+		ep_ctx->reserved[0] |= cpu_to_le32(EP_BPKTS(sch_ep->pkts)
+			| EP_BCSCOUNT(sch_ep->cs_count)
+			| EP_BBM(sch_ep->burst_mode));
+		ep_ctx->reserved[1] |= cpu_to_le32(EP_BOFFSET(sch_ep->offset)
+			| EP_BREPEAT(sch_ep->repeat));
+
+		xhci_dbg(xhci, " PKTS:%x, CSCOUNT:%x, BM:%x, OFFSET:%x, REPEAT:%x\n",
+			sch_ep->pkts, sch_ep->cs_count, sch_ep->burst_mode,
+			sch_ep->offset, sch_ep->repeat);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xhci_mtk_check_bandwidth);
+
+void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
+{
+	struct xhci_hcd_mtk *mtk = hcd_to_mtk(hcd);
+	struct mu3h_sch_ep_info *sch_ep, *tmp;
+
+	dev_dbg(&udev->dev, "%s\n", __func__);
+
+	list_for_each_entry_safe(sch_ep, tmp, &mtk->bw_ep_list_new, endpoint) {
+		xhci_mtk_drop_ep(mtk, udev, sch_ep);
+	}
+}
+EXPORT_SYMBOL_GPL(xhci_mtk_reset_bandwidth);
diff --git a/drivers/usb/host/xhci-mtk.h b/drivers/usb/host/xhci-mtk.h
index 8be8c5f7ff62..05ca989985fc 100644
--- a/drivers/usb/host/xhci-mtk.h
+++ b/drivers/usb/host/xhci-mtk.h
@@ -130,6 +130,7 @@ struct mu3c_ippc_regs {
 struct xhci_hcd_mtk {
 	struct device *dev;
 	struct usb_hcd *hcd;
+	struct list_head bw_ep_list_new;
 	struct mu3h_sch_bw_info *sch_array;
 	struct mu3c_ippc_regs __iomem *ippc_regs;
 	bool has_ippc;
@@ -165,6 +166,8 @@ int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 		struct usb_host_endpoint *ep);
 void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd, struct usb_device *udev,
 		struct usb_host_endpoint *ep);
+int xhci_mtk_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
+void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev);
 #else
 static inline int xhci_mtk_add_ep_quirk(struct usb_hcd *hcd,
 	struct usb_device *udev, struct usb_host_endpoint *ep)
@@ -178,6 +181,16 @@ static inline void xhci_mtk_drop_ep_quirk(struct usb_hcd *hcd,
 {
 }
 
+static inline int xhci_mtk_check_bandwidth(struct usb_hcd *hcd,
+	struct usb_device *udev)
+{
+	return 0;
+}
+
+static inline void xhci_mtk_reset_bandwidth(struct usb_hcd *hcd,
+	struct usb_device *udev)
+{
+}
 #endif
 
 #endif /* _XHCI_MTK_H_ */
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index 2bf6c526ac7a..5a9e01b33688 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -2854,6 +2854,12 @@ static int xhci_check_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
 	xhci_dbg(xhci, "%s called for udev %p\n", __func__, udev);
 	virt_dev = xhci->devs[udev->slot_id];
 
+	if (xhci->quirks & XHCI_MTK_HOST) {
+		ret = xhci_mtk_check_bandwidth(hcd, udev);
+		if (ret < 0)
+			return ret;
+	}
+
 	command = xhci_alloc_command(xhci, true, GFP_KERNEL);
 	if (!command)
 		return -ENOMEM;
@@ -2941,6 +2947,9 @@ static void xhci_reset_bandwidth(struct usb_hcd *hcd, struct usb_device *udev)
 		return;
 	xhci = hcd_to_xhci(hcd);
 
+	if (xhci->quirks & XHCI_MTK_HOST)
+		xhci_mtk_reset_bandwidth(hcd, udev);
+
 	xhci_dbg(xhci, "%s called for udev %p\n", __func__, udev);
 	virt_dev = xhci->devs[udev->slot_id];
 	/* Free any rings allocated for added endpoints */