Message ID | 20230321062600.2539544-1-s-vadapalli@ti.com (mailing list archive) |
---|---|
State | Accepted |
Commit | 7849c42da2ca09f11fe04aeb8dbde63df756edaa |
Delegated to: | Netdev Maintainers |
Series | [net-next] net: ethernet: ti: am65-cpts: adjust estf following ptp changes |
On 21/03/2023 08:26, Siddharth Vadapalli wrote:
> From: Grygorii Strashko <grygorii.strashko@ti.com>
>
> When the CPTS clock is synced/adjusted by running linuxptp (ptp4l/phc2sys),
> it will cause the TSN EST schedule to drift away over time. This is because
> the schedule is driven by the EstF periodic counter whose pulse length is
> defined in ref_clk cycles and it does not automatically sync to CPTS clock.
> _______
> _|
> ^
> expected cycle start time boundary
> _______________
> _|_|___|_|
> ^
> EstF drifted away -> direction
>
> To fix it, the same PPM adjustment has to be applied to EstF as done to the
> PHC CPTS clock, in order to correct the TSN EST cycle length and keep them
> in sync.
>
> Drifted cycle:
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635968230373377017
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635968230373877017
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635968230374377017
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635968230374877017
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635968230375377017
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635968230375877023
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635968230376377018
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635968230376877018
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635968230377377018
>
> Stable cycle:
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635966863193375473
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635966863193875473
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635966863194375473
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635966863194875473
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635966863195375473
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635966863195875473
> AM65_CPTS_EVT: 7 e1:01770001 e2:000000ff t:1635966863196375473
>
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>

Reviewed-by: Roger Quadros <rogerq@kernel.org>
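For readers unfamiliar with the adjustment path mentioned above: the PPM corrections reach the driver through the PHC's .adjfine() callback, which the kernel invokes when user space issues a frequency adjustment on the PTP clock device (this is what phc2sys does continuously while tracking a reference clock). Below is a minimal user-space sketch of that path; the /dev/ptp0 node and the +10 ppm value are illustrative assumptions, not taken from the patch.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/timex.h>
#include <time.h>
#include <unistd.h>

/* Standard PHC fd-to-clockid conversion (see tools/testing/selftests/ptp). */
#define CLOCKFD 3
#define FD_TO_CLOCKID(fd) ((~(clockid_t)(fd) << 3) | CLOCKFD)

int main(void)
{
	struct timex tx = { 0 };
	int fd;

	/* /dev/ptp0 is assumed to be the CPTS PHC on the target board. */
	fd = open("/dev/ptp0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/*
	 * Request a +10 ppm frequency correction. The freq field is ppm
	 * scaled by 2^16; the kernel hands it to the driver's .adjfine()
	 * callback as scaled_ppm.
	 */
	tx.modes = ADJ_FREQUENCY;
	tx.freq = 10 * 65536;

	if (clock_adjtime(FD_TO_CLOCKID(fd), &tx) < 0)
		perror("clock_adjtime");

	close(fd);
	return 0;
}

ptp4l/phc2sys apply this kind of correction continuously, which is why an EstF cycle that does not receive the same adjustment drifts over time, as the commit message describes.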
On Tue, Mar 21, 2023 at 11:56:00AM +0530, Siddharth Vadapalli wrote:
> From: Grygorii Strashko <grygorii.strashko@ti.com>
>
> When the CPTS clock is synced/adjusted by running linuxptp (ptp4l/phc2sys),
> it will cause the TSN EST schedule to drift away over time. This is because
> the schedule is driven by the EstF periodic counter whose pulse length is
> defined in ref_clk cycles and it does not automatically sync to CPTS clock.
>
> [...]
>
> Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
> Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>

Acked-by: Richard Cochran <richardcochran@gmail.com>
Hello:

This patch was applied to netdev/net-next.git (main)
by Paolo Abeni <pabeni@redhat.com>:

On Tue, 21 Mar 2023 11:56:00 +0530 you wrote:
> From: Grygorii Strashko <grygorii.strashko@ti.com>
>
> When the CPTS clock is synced/adjusted by running linuxptp (ptp4l/phc2sys),
> it will cause the TSN EST schedule to drift away over time. This is because
> the schedule is driven by the EstF periodic counter whose pulse length is
> defined in ref_clk cycles and it does not automatically sync to CPTS clock.
> _______
> _|
> ^
> expected cycle start time boundary
> _______________
> _|_|___|_|
> ^
> EstF drifted away -> direction
>
> [...]

Here is the summary with links:
  - [net-next] net: ethernet: ti: am65-cpts: adjust estf following ptp changes
    https://git.kernel.org/netdev/net-next/c/7849c42da2ca

You are awesome, thank you!
diff --git a/drivers/net/ethernet/ti/am65-cpts.c b/drivers/net/ethernet/ti/am65-cpts.c
index 16ee9c29cb35..1d9a399c9661 100644
--- a/drivers/net/ethernet/ti/am65-cpts.c
+++ b/drivers/net/ethernet/ti/am65-cpts.c
@@ -175,6 +175,7 @@ struct am65_cpts {
 	u64 timestamp;
 	u32 genf_enable;
 	u32 hw_ts_enable;
+	u32 estf_enable;
 	struct sk_buff_head txq;
 	bool pps_enabled;
 	bool pps_present;
@@ -405,13 +406,13 @@ static irqreturn_t am65_cpts_interrupt(int irq, void *dev_id)
 static int am65_cpts_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
 {
 	struct am65_cpts *cpts = container_of(ptp, struct am65_cpts, ptp_info);
-	u32 pps_ctrl_val = 0, pps_ppm_hi = 0, pps_ppm_low = 0;
+	u32 estf_ctrl_val = 0, estf_ppm_hi = 0, estf_ppm_low = 0;
 	s32 ppb = scaled_ppm_to_ppb(scaled_ppm);
 	int pps_index = cpts->pps_genf_idx;
 	u64 adj_period, pps_adj_period;
 	u32 ctrl_val, ppm_hi, ppm_low;
 	unsigned long flags;
-	int neg_adj = 0;
+	int neg_adj = 0, i;
 
 	if (ppb < 0) {
 		neg_adj = 1;
@@ -441,19 +442,19 @@ static int am65_cpts_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
 	ppm_low = lower_32_bits(adj_period);
 
 	if (cpts->pps_enabled) {
-		pps_ctrl_val = am65_cpts_read32(cpts, genf[pps_index].control);
+		estf_ctrl_val = am65_cpts_read32(cpts, genf[pps_index].control);
 		if (neg_adj)
-			pps_ctrl_val &= ~BIT(1);
+			estf_ctrl_val &= ~BIT(1);
 		else
-			pps_ctrl_val |= BIT(1);
+			estf_ctrl_val |= BIT(1);
 
 		/* GenF PPM will do correction using cpts refclk tick which is
 		 * (cpts->ts_add_val + 1) ns, so GenF length PPM adj period
 		 * need to be corrected.
 		 */
 		pps_adj_period = adj_period * (cpts->ts_add_val + 1);
-		pps_ppm_hi = upper_32_bits(pps_adj_period) & 0x3FF;
-		pps_ppm_low = lower_32_bits(pps_adj_period);
+		estf_ppm_hi = upper_32_bits(pps_adj_period) & 0x3FF;
+		estf_ppm_low = lower_32_bits(pps_adj_period);
 	}
 
 	spin_lock_irqsave(&cpts->lock, flags);
@@ -471,11 +472,18 @@ static int am65_cpts_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
 	am65_cpts_write32(cpts, ppm_low, ts_ppm_low);
 
 	if (cpts->pps_enabled) {
-		am65_cpts_write32(cpts, pps_ctrl_val, genf[pps_index].control);
-		am65_cpts_write32(cpts, pps_ppm_hi, genf[pps_index].ppm_hi);
-		am65_cpts_write32(cpts, pps_ppm_low, genf[pps_index].ppm_low);
+		am65_cpts_write32(cpts, estf_ctrl_val, genf[pps_index].control);
+		am65_cpts_write32(cpts, estf_ppm_hi, genf[pps_index].ppm_hi);
+		am65_cpts_write32(cpts, estf_ppm_low, genf[pps_index].ppm_low);
 	}
 
+	for (i = 0; i < AM65_CPTS_ESTF_MAX_NUM; i++) {
+		if (cpts->estf_enable & BIT(i)) {
+			am65_cpts_write32(cpts, estf_ctrl_val, estf[i].control);
+			am65_cpts_write32(cpts, estf_ppm_hi, estf[i].ppm_hi);
+			am65_cpts_write32(cpts, estf_ppm_low, estf[i].ppm_low);
+		}
+	}
 	/* All GenF/EstF can be updated here the same way */
 	spin_unlock_irqrestore(&cpts->lock, flags);
 
@@ -596,6 +604,11 @@ int am65_cpts_estf_enable(struct am65_cpts *cpts, int idx,
 	am65_cpts_write32(cpts, val, estf[idx].comp_lo);
 	val = lower_32_bits(cycles);
 	am65_cpts_write32(cpts, val, estf[idx].length);
+	am65_cpts_write32(cpts, 0, estf[idx].control);
+	am65_cpts_write32(cpts, 0, estf[idx].ppm_hi);
+	am65_cpts_write32(cpts, 0, estf[idx].ppm_low);
+
+	cpts->estf_enable |= BIT(idx);
 
 	dev_dbg(cpts->dev, "%s: ESTF:%u enabled\n", __func__, idx);
 
@@ -606,6 +619,7 @@ EXPORT_SYMBOL_GPL(am65_cpts_estf_enable);
 void am65_cpts_estf_disable(struct am65_cpts *cpts, int idx)
 {
 	am65_cpts_write32(cpts, 0, estf[idx].length);
+	cpts->estf_enable &= ~BIT(idx);
 
 	dev_dbg(cpts->dev, "%s: ESTF:%u disabled\n", __func__, idx);
 }
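For context on where the new estf_enable bookkeeping is used, here is a hedged sketch of a MAC-side caller programming an EST cycle through the exported helpers this patch touches. The include path, the example_est_* helper names, and the ns_period/ns_start fields of struct am65_cpts_estf_cfg are assumptions based on the driver sources, not part of the patch itself.

/*
 * Hedged sketch of a MAC-side caller of the exported EstF helpers touched
 * by this patch. The include path and the ns_period/ns_start field names
 * of struct am65_cpts_estf_cfg are assumptions based on the driver's
 * header (am65-cpts.h); example_est_* are hypothetical helpers.
 */
#include <linux/types.h>

#include "am65-cpts.h"

static int example_est_start(struct am65_cpts *cpts, int port_idx,
			     u64 cycle_time_ns, u64 base_time_ns)
{
	struct am65_cpts_estf_cfg cfg = {
		.ns_period = cycle_time_ns,	/* EST cycle length */
		.ns_start = base_time_ns,	/* admin base time */
	};

	/*
	 * With this patch, enabling EstF also zeroes its control/ppm
	 * registers and records the index in cpts->estf_enable, so later
	 * .adjfine() calls apply the same PPM correction to this cycle.
	 */
	return am65_cpts_estf_enable(cpts, port_idx, &cfg);
}

static void example_est_stop(struct am65_cpts *cpts, int port_idx)
{
	/* Stops the generator and removes it from the adjfine fix-up set. */
	am65_cpts_estf_disable(cpts, port_idx);
}

The key point is the new cpts->estf_enable bitmask: only generators recorded there have their control and ppm_hi/ppm_low registers rewritten in am65_cpts_ptp_adjfine(), which is what keeps an active EST cycle from drifting as shown in the "Drifted cycle" trace above.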