From patchwork Thu Jul 30 02:24:57 2020
X-Patchwork-Submitter: Badhri Jagan Sridharan
X-Patchwork-Id: 11692201
Date: Wed, 29 Jul 2020 19:24:57 -0700
Message-Id: <20200730022457.3021112-1-badhri@google.com>
Subject: [PATCH v3] usb: typec: tcpm: Migrate workqueue to RT priority for processing events
From: Badhri Jagan Sridharan
To: Guenter Roeck , Heikki Krogerus , Greg Kroah-Hartman
Cc: linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org,
    Badhri Jagan Sridharan

"tReceiverResponse 15 ms Section 6.6.2
 The receiver of a Message requiring a response Shall respond within
 tReceiverResponse in order to ensure that the
 sender’s SenderResponseTimer does not expire."

When the CPU complex is busy running other lower-priority work items,
TCPM's work queue sometimes does not get scheduled in time to meet the
above requirement from the spec. Move to the kthread_work APIs so that
the work runs with real-time priority, just lower than the default
threaded irq priority, MAX_USER_RT_PRIO/2 + 1 (a higher number implies
lower priority).

Further, as observed in 1ff688209e2e, move to hrtimers to overcome
scheduling latency while scheduling the delayed work.

TCPM has three work streams:
1. tcpm_state_machine
2. vdm_state_machine
3. event_work

tcpm_state_machine and vdm_state_machine both schedule work in the
future, i.e. delayed; hence each of them has a corresponding hrtimer,
tcpm_state_machine_timer and vdm_state_machine_timer. When work is to
be queued right away, kthread_queue_work is used; otherwise the
relevant timer is programmed and queues the kthread_work upon expiry.

kthread_create_worker creates only one kthread worker thread, so the
single-threadedness of the previous workqueue is retained.

Signed-off-by: Badhri Jagan Sridharan
Reviewed-by: Guenter Roeck
Reviewed-by: Heikki Krogerus
---
Changes since v1 (Guenter's suggestions):
- Remove redundant call to hrtimer_cancel while calling hrtimer_start.

Changes since v2 (Greg KH's suggestions):
- Rebase on usb-next TOT:
  633198cd2945b7 (HEAD -> usb-next-1) usb: typec: tcpm: Migrate workqueue to RT priority for processing events
  fa56dd9152ef95 (origin/usb-next) Merge tag 'usb-serial-5.9-rc1' of https://git.kernel.org/pub/scm/linux/kernel/git/johan/usb-serial into usb-next
  25252919a1050e xhci: dbgtty: Make some functions static
  b0e02550346e67 xhci: dbc: Make function xhci_dbc_ring_alloc() static
  ca6377900974c3 Revert "usb: dwc2: override PHY input signals with usb role switch support"
  09df709cb5aeb2 Revert "usb: dwc2: don't use ID/Vbus detection if usb-role-switch on STM32MP15 SoCs"
  17a82716587e9d USB: iowarrior: fix up report size handling for some devices
  e98ba8cc3f8a89 Merge tag 'usb-for-v5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/balbi/usb into usb-next
  c97793089b11f7 Merge 5.8-rc7 into usb-next
  92ed301919932f (tag: v5.8-rc7, origin/usb-linus, origin/main) Linux 5.8-rc7
- Added Reviewed-by tags.
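For reference, the kthread_worker + hrtimer arrangement adopted by the
patch boils down to the following minimal sketch. This is illustrative
only and not part of the patch; identifiers such as example_worker,
example_work, example_timer and example_init() are made up, and the
priority value simply mirrors what the patch uses.

/* Illustrative sketch of the pattern used by this patch. */
#include <linux/err.h>
#include <linux/hrtimer.h>
#include <linux/kthread.h>
#include <linux/ktime.h>
#include <linux/sched.h>
#include <uapi/linux/sched/types.h>

static struct kthread_worker *example_worker;
static struct kthread_work example_work;
static struct hrtimer example_timer;

static void example_work_fn(struct kthread_work *work)
{
	/* Runs in the dedicated RT kthread worker context. */
}

static enum hrtimer_restart example_timer_fn(struct hrtimer *timer)
{
	/* Timer expiry: hand the work item to the RT worker thread. */
	kthread_queue_work(example_worker, &example_work);
	return HRTIMER_NORESTART;
}

static int example_init(void)
{
	struct sched_param param = { .sched_priority = MAX_USER_RT_PRIO / 2 + 1 };

	/* Single worker thread, analogous to a singlethreaded workqueue. */
	example_worker = kthread_create_worker(0, "example");
	if (IS_ERR(example_worker))
		return PTR_ERR(example_worker);
	sched_setscheduler(example_worker->task, SCHED_FIFO, &param);

	kthread_init_work(&example_work, example_work_fn);
	hrtimer_init(&example_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	example_timer.function = example_timer_fn;

	/* Immediate work is queued directly; delayed work arms the hrtimer. */
	kthread_queue_work(example_worker, &example_work);
	hrtimer_start(&example_timer, ms_to_ktime(15), HRTIMER_MODE_REL);
	return 0;
}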
---
 drivers/usb/typec/tcpm/tcpm.c | 140 ++++++++++++++++++++++------------
 1 file changed, 92 insertions(+), 48 deletions(-)

diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
index ff1cbd2147ca8a..fa9002944dc766 100644
--- a/drivers/usb/typec/tcpm/tcpm.c
+++ b/drivers/usb/typec/tcpm/tcpm.c
@@ -8,8 +8,10 @@
 #include
 #include
 #include
+#include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -28,7 +30,8 @@
 #include
 #include
 #include
-#include
+
+#include
 
 #define FOREACH_STATE(S)			\
 	S(INVALID_STATE),			\
@@ -195,7 +198,7 @@ struct tcpm_port {
 	struct device *dev;
 
 	struct mutex lock;		/* tcpm state machine lock */
-	struct workqueue_struct *wq;
+	struct kthread_worker *wq;
 
 	struct typec_capability typec_caps;
 	struct typec_port *typec_port;
@@ -239,15 +242,17 @@ struct tcpm_port {
 	enum tcpm_state prev_state;
 	enum tcpm_state state;
 	enum tcpm_state delayed_state;
-	unsigned long delayed_runtime;
+	ktime_t delayed_runtime;
 	unsigned long delay_ms;
 
 	spinlock_t pd_event_lock;
 	u32 pd_events;
 
-	struct work_struct event_work;
-	struct delayed_work state_machine;
-	struct delayed_work vdm_state_machine;
+	struct kthread_work event_work;
+	struct hrtimer state_machine_timer;
+	struct kthread_work state_machine;
+	struct hrtimer vdm_state_machine_timer;
+	struct kthread_work vdm_state_machine;
 	bool state_machine_running;
 
 	struct completion tx_complete;
@@ -332,7 +337,7 @@ struct tcpm_port {
 };
 
 struct pd_rx_event {
-	struct work_struct work;
+	struct kthread_work work;
 	struct tcpm_port *port;
 	struct pd_message msg;
 };
@@ -906,6 +911,27 @@ static int tcpm_pd_send_sink_caps(struct tcpm_port *port)
 	return tcpm_pd_transmit(port, TCPC_TX_SOP, &msg);
 }
 
+static void mod_tcpm_delayed_work(struct tcpm_port *port, unsigned int delay_ms)
+{
+	if (delay_ms) {
+		hrtimer_start(&port->state_machine_timer, ms_to_ktime(delay_ms), HRTIMER_MODE_REL);
+	} else {
+		hrtimer_cancel(&port->state_machine_timer);
+		kthread_queue_work(port->wq, &port->state_machine);
+	}
+}
+
+static void mod_vdm_delayed_work(struct tcpm_port *port, unsigned int delay_ms)
+{
+	if (delay_ms) {
+		hrtimer_start(&port->vdm_state_machine_timer, ms_to_ktime(delay_ms),
+			      HRTIMER_MODE_REL);
+	} else {
+		hrtimer_cancel(&port->vdm_state_machine_timer);
+		kthread_queue_work(port->wq, &port->vdm_state_machine);
+	}
+}
+
 static void tcpm_set_state(struct tcpm_port *port, enum tcpm_state state,
 			   unsigned int delay_ms)
 {
@@ -914,9 +940,8 @@ static void tcpm_set_state(struct tcpm_port *port, enum tcpm_state state,
 			 tcpm_states[port->state], tcpm_states[state],
 			 delay_ms);
 		port->delayed_state = state;
-		mod_delayed_work(port->wq, &port->state_machine,
-				 msecs_to_jiffies(delay_ms));
-		port->delayed_runtime = jiffies + msecs_to_jiffies(delay_ms);
+		mod_tcpm_delayed_work(port, delay_ms);
+		port->delayed_runtime = ktime_add(ktime_get(), ms_to_ktime(delay_ms));
 		port->delay_ms = delay_ms;
 	} else {
 		tcpm_log(port, "state change %s -> %s",
@@ -931,7 +956,7 @@ static void tcpm_set_state(struct tcpm_port *port, enum tcpm_state state,
 		 * machine.
 		 */
		if (!port->state_machine_running)
-			mod_delayed_work(port->wq, &port->state_machine, 0);
+			mod_tcpm_delayed_work(port, 0);
 	}
 }
 
@@ -952,7 +977,7 @@ static void tcpm_queue_message(struct tcpm_port *port,
 			       enum pd_msg_request message)
 {
 	port->queued_message = message;
-	mod_delayed_work(port->wq, &port->state_machine, 0);
+	mod_tcpm_delayed_work(port, 0);
 }
 
 /*
@@ -1238,8 +1263,7 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
 				port->vdm_state = VDM_STATE_WAIT_RSP_BUSY;
 				port->vdo_retry = (p0 & ~VDO_CMDT_MASK) | CMDT_INIT;
-				mod_delayed_work(port->wq, &port->vdm_state_machine,
-						 msecs_to_jiffies(PD_T_VDM_BUSY));
+				mod_vdm_delayed_work(port, PD_T_VDM_BUSY);
 				return;
 			}
 			port->vdm_state = VDM_STATE_DONE;
@@ -1250,7 +1274,7 @@ static void tcpm_handle_vdm_request(struct tcpm_port *port,
 
 	if (rlen > 0) {
 		tcpm_queue_vdm(port, response[0], &response[1], rlen - 1);
-		mod_delayed_work(port->wq, &port->vdm_state_machine, 0);
+		mod_vdm_delayed_work(port, 0);
 	}
 }
 
@@ -1267,7 +1291,7 @@ static void tcpm_send_vdm(struct tcpm_port *port, u32 vid, int cmd,
 			1 : (PD_VDO_CMD(cmd) <= CMD_ATTENTION), cmd);
 	tcpm_queue_vdm(port, header, data, count);
-	mod_delayed_work(port->wq, &port->vdm_state_machine, 0);
+	mod_vdm_delayed_work(port, 0);
 }
 
 static unsigned int vdm_ready_timeout(u32 vdm_hdr)
@@ -1334,8 +1358,7 @@ static void vdm_run_state_machine(struct tcpm_port *port)
 			port->vdm_retries = 0;
 			port->vdm_state = VDM_STATE_BUSY;
 			timeout = vdm_ready_timeout(port->vdo_data[0]);
-			mod_delayed_work(port->wq, &port->vdm_state_machine,
-					 timeout);
+			mod_vdm_delayed_work(port, timeout);
 		}
 		break;
 	case VDM_STATE_WAIT_RSP_BUSY:
@@ -1364,10 +1387,9 @@ static void vdm_run_state_machine(struct tcpm_port *port)
 	}
 }
 
-static void vdm_state_machine_work(struct work_struct *work)
+static void vdm_state_machine_work(struct kthread_work *work)
 {
-	struct tcpm_port *port = container_of(work, struct tcpm_port,
-					      vdm_state_machine.work);
+	struct tcpm_port *port = container_of(work, struct tcpm_port, vdm_state_machine);
 	enum vdm_states prev_state;
 
 	mutex_lock(&port->lock);
@@ -1515,7 +1537,7 @@ static int tcpm_altmode_enter(struct typec_altmode *altmode, u32 *vdo)
 	header |= VDO_OPOS(altmode->mode);
 
 	tcpm_queue_vdm(port, header, vdo, vdo ? 1 : 0);
-	mod_delayed_work(port->wq, &port->vdm_state_machine, 0);
+	mod_vdm_delayed_work(port, 0);
 	mutex_unlock(&port->lock);
 
 	return 0;
@@ -1531,7 +1553,7 @@ static int tcpm_altmode_exit(struct typec_altmode *altmode)
 	header |= VDO_OPOS(altmode->mode);
 
 	tcpm_queue_vdm(port, header, NULL, 0);
-	mod_delayed_work(port->wq, &port->vdm_state_machine, 0);
+	mod_vdm_delayed_work(port, 0);
 	mutex_unlock(&port->lock);
 
 	return 0;
@@ -1544,7 +1566,7 @@ static int tcpm_altmode_vdm(struct typec_altmode *altmode,
 
 	mutex_lock(&port->lock);
 	tcpm_queue_vdm(port, header, data, count - 1);
-	mod_delayed_work(port->wq, &port->vdm_state_machine, 0);
+	mod_vdm_delayed_work(port, 0);
 	mutex_unlock(&port->lock);
 
 	return 0;
@@ -1961,7 +1983,7 @@ static void tcpm_pd_ext_msg_request(struct tcpm_port *port,
 	}
 }
 
-static void tcpm_pd_rx_handler(struct work_struct *work)
+static void tcpm_pd_rx_handler(struct kthread_work *work)
 {
 	struct pd_rx_event *event = container_of(work,
 						 struct pd_rx_event, work);
@@ -2023,10 +2045,10 @@ void tcpm_pd_receive(struct tcpm_port *port, const struct pd_message *msg)
 	if (!event)
 		return;
 
-	INIT_WORK(&event->work, tcpm_pd_rx_handler);
+	kthread_init_work(&event->work, tcpm_pd_rx_handler);
 	event->port = port;
 	memcpy(&event->msg, msg, sizeof(*msg));
-	queue_work(port->wq, &event->work);
+	kthread_queue_work(port->wq, &event->work);
 }
 EXPORT_SYMBOL_GPL(tcpm_pd_receive);
 
@@ -2079,9 +2101,9 @@ static bool tcpm_send_queued_message(struct tcpm_port *port)
 	} while (port->queued_message != PD_MSG_NONE);
 
 	if (port->delayed_state != INVALID_STATE) {
-		if (time_is_after_jiffies(port->delayed_runtime)) {
-			mod_delayed_work(port->wq, &port->state_machine,
-					 port->delayed_runtime - jiffies);
+		if (ktime_after(port->delayed_runtime, ktime_get())) {
+			mod_tcpm_delayed_work(port, ktime_to_ms(ktime_sub(port->delayed_runtime,
+									  ktime_get())));
 			return true;
 		}
 		port->delayed_state = INVALID_STATE;
@@ -3214,10 +3236,9 @@ static void run_state_machine(struct tcpm_port *port)
 	case SNK_DISCOVERY_DEBOUNCE_DONE:
 		if (!tcpm_port_is_disconnected(port) &&
 		    tcpm_port_is_sink(port) &&
-		    time_is_after_jiffies(port->delayed_runtime)) {
+		    ktime_after(port->delayed_runtime, ktime_get())) {
 			tcpm_set_state(port, SNK_DISCOVERY,
-				       jiffies_to_msecs(port->delayed_runtime -
-							jiffies));
+				       ktime_to_ms(ktime_sub(port->delayed_runtime, ktime_get())));
 			break;
 		}
 		tcpm_set_state(port, unattached_state(port), 0);
@@ -3612,10 +3633,9 @@ static void run_state_machine(struct tcpm_port *port)
 	}
 }
 
-static void tcpm_state_machine_work(struct work_struct *work)
+static void tcpm_state_machine_work(struct kthread_work *work)
 {
-	struct tcpm_port *port = container_of(work, struct tcpm_port,
-					      state_machine.work);
+	struct tcpm_port *port = container_of(work, struct tcpm_port, state_machine);
 	enum tcpm_state prev_state;
 
 	mutex_lock(&port->lock);
@@ -3975,7 +3995,7 @@ static void _tcpm_pd_hard_reset(struct tcpm_port *port)
 		       0);
 }
 
-static void tcpm_pd_event_handler(struct work_struct *work)
+static void tcpm_pd_event_handler(struct kthread_work *work)
 {
 	struct tcpm_port *port = container_of(work, struct tcpm_port,
 					      event_work);
@@ -4016,7 +4036,7 @@ void tcpm_cc_change(struct tcpm_port *port)
 	spin_lock(&port->pd_event_lock);
 	port->pd_events |= TCPM_CC_EVENT;
 	spin_unlock(&port->pd_event_lock);
-	queue_work(port->wq, &port->event_work);
+	kthread_queue_work(port->wq, &port->event_work);
 }
 EXPORT_SYMBOL_GPL(tcpm_cc_change);
 
@@ -4025,7 +4045,7 @@ void tcpm_vbus_change(struct tcpm_port *port)
 	spin_lock(&port->pd_event_lock);
 	port->pd_events |= TCPM_VBUS_EVENT;
 	spin_unlock(&port->pd_event_lock);
-	queue_work(port->wq, &port->event_work);
+	kthread_queue_work(port->wq, &port->event_work);
 }
 EXPORT_SYMBOL_GPL(tcpm_vbus_change);
 
@@ -4034,7 +4054,7 @@ void tcpm_pd_hard_reset(struct tcpm_port *port)
 	spin_lock(&port->pd_event_lock);
 	port->pd_events = TCPM_RESET_EVENT;
 	spin_unlock(&port->pd_event_lock);
-	queue_work(port->wq, &port->event_work);
+	kthread_queue_work(port->wq, &port->event_work);
 }
 EXPORT_SYMBOL_GPL(tcpm_pd_hard_reset);
 
@@ -4742,10 +4762,28 @@ static int devm_tcpm_psy_register(struct tcpm_port *port)
 	return PTR_ERR_OR_ZERO(port->psy);
 }
 
+static enum hrtimer_restart state_machine_timer_handler(struct hrtimer *timer)
+{
+	struct tcpm_port *port = container_of(timer, struct tcpm_port, state_machine_timer);
+
+	kthread_queue_work(port->wq, &port->state_machine);
+	return HRTIMER_NORESTART;
+}
+
+static enum hrtimer_restart vdm_state_machine_timer_handler(struct hrtimer *timer)
+{
+	struct tcpm_port *port = container_of(timer, struct tcpm_port, vdm_state_machine_timer);
+
+	kthread_queue_work(port->wq, &port->vdm_state_machine);
+	return HRTIMER_NORESTART;
+}
+
 struct tcpm_port *tcpm_register_port(struct device *dev, struct tcpc_dev *tcpc)
 {
 	struct tcpm_port *port;
 	int err;
+	/* Priority just lower than default irq thread priority */
+	struct sched_param param = {.sched_priority = (MAX_USER_RT_PRIO / 2) + 1,};
 
 	if (!dev || !tcpc ||
 	    !tcpc->get_vbus || !tcpc->set_cc || !tcpc->get_cc ||
@@ -4763,12 +4801,18 @@ struct tcpm_port *tcpm_register_port(struct device *dev, struct tcpc_dev *tcpc)
 	mutex_init(&port->lock);
 	mutex_init(&port->swap_lock);
 
-	port->wq = create_singlethread_workqueue(dev_name(dev));
-	if (!port->wq)
-		return ERR_PTR(-ENOMEM);
-	INIT_DELAYED_WORK(&port->state_machine, tcpm_state_machine_work);
-	INIT_DELAYED_WORK(&port->vdm_state_machine, vdm_state_machine_work);
-	INIT_WORK(&port->event_work, tcpm_pd_event_handler);
+	port->wq = kthread_create_worker(0, dev_name(dev));
+	if (IS_ERR(port->wq))
+		return (struct tcpm_port *)port->wq;
+	sched_setscheduler(port->wq->task, SCHED_FIFO, &param);
+
+	kthread_init_work(&port->state_machine, tcpm_state_machine_work);
+	kthread_init_work(&port->vdm_state_machine, vdm_state_machine_work);
+	kthread_init_work(&port->event_work, tcpm_pd_event_handler);
+	hrtimer_init(&port->state_machine_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	port->state_machine_timer.function = state_machine_timer_handler;
+	hrtimer_init(&port->vdm_state_machine_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
+	port->vdm_state_machine_timer.function = vdm_state_machine_timer_handler;
 
 	spin_lock_init(&port->pd_event_lock);
 
@@ -4820,7 +4864,7 @@ struct tcpm_port *tcpm_register_port(struct device *dev, struct tcpc_dev *tcpc)
 	usb_role_switch_put(port->role_sw);
 out_destroy_wq:
 	tcpm_debugfs_exit(port);
-	destroy_workqueue(port->wq);
+	kthread_destroy_worker(port->wq);
 	return ERR_PTR(err);
 }
 EXPORT_SYMBOL_GPL(tcpm_register_port);
@@ -4835,7 +4879,7 @@ void tcpm_unregister_port(struct tcpm_port *port)
 	typec_unregister_port(port->typec_port);
 	usb_role_switch_put(port->role_sw);
 	tcpm_debugfs_exit(port);
-	destroy_workqueue(port->wq);
+	kthread_destroy_worker(port->wq);
 }
 EXPORT_SYMBOL_GPL(tcpm_unregister_port);