From patchwork Wed Jan 11 11:44:26 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: s-vadapalli
X-Patchwork-Id: 13096519
From: Siddharth Vadapalli
Subject: [PATCH net-next 2/5] net: ethernet: ti: am65-cpts: add pps support
Date: Wed, 11 Jan 2023 17:14:26 +0530
Message-ID: <20230111114429.1297557-3-s-vadapalli@ti.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230111114429.1297557-1-s-vadapalli@ti.com>
References: <20230111114429.1297557-1-s-vadapalli@ti.com>

From: Grygorii Strashko

CPTS doesn't have HW support for PPS ("pulse per second") signal
generation, but it can be modeled by using the Time Sync Router and
routing the GenFx (periodic signal generator) output to the
CPTS_HWy_TS_PUSH (hardware time stamp) input, and configuring GenFx to
generate 1 sec pulses.

      +------------------------+
      |          CPTS          |
      |                        |
  +--->CPTS_HW4_PUSH      GENFx+---+
  |   |                        |   |
  |   +------------------------+   |
  |                                |
  +--------------------------------+

Add the corresponding support to the am65-cpts driver. The DT property
"ti,pps" has to be used to enable PPS support and to configure the
[CPTS_HWy_TS_PUSH, GenFx] pair.

Once enabled, PPS can be tested using the ppstest tool:

  # ./ppstest /dev/pps0

Signed-off-by: Grygorii Strashko
Signed-off-by: Siddharth Vadapalli
---
 drivers/net/ethernet/ti/am65-cpts.c | 85 +++++++++++++++++++++++++++--
 1 file changed, 80 insertions(+), 5 deletions(-)
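For reference, the PTP_CLK_REQ_PPS request handled further below is
normally triggered from userspace through the standard PTP_ENABLE_PPS
ioctl on the PHC character device. The following minimal sketch is not
part of this patch; the /dev/ptp0 path is an assumption, pick the PHC
index reported by "ethtool -T <iface>" on the actual system:

/*
 * Sketch only (not part of this patch): enable PPS on the CPTS PHC,
 * which reaches the driver's PTP_CLK_REQ_PPS handler added below.
 * Requires CAP_SYS_TIME (typically root).
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/ptp_clock.h>

int main(void)
{
        int fd = open("/dev/ptp0", O_RDWR);    /* assumed PHC node */

        if (fd < 0) {
                perror("open /dev/ptp0");
                return 1;
        }

        /* Non-zero argument turns the 1 Hz PPS events on. */
        if (ioctl(fd, PTP_ENABLE_PPS, 1)) {
                perror("PTP_ENABLE_PPS");
                close(fd);
                return 1;
        }

        close(fd);
        return 0;
}

With PPS enabled this way, the events surface on the corresponding
/dev/ppsX node, which is what the ppstest invocation above reads.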
+ "extts" : "pps", pevent.index, event->timestamp); ptp_clock_event(cpts->ptp_clock, &pevent); @@ -507,7 +518,13 @@ static void am65_cpts_extts_enable_hw(struct am65_cpts *cpts, u32 index, int on) static int am65_cpts_extts_enable(struct am65_cpts *cpts, u32 index, int on) { - if (!!(cpts->hw_ts_enable & BIT(index)) == !!on) + if (index >= cpts->ptp_info.n_ext_ts) + return -ENXIO; + + if (cpts->pps_present && index == cpts->pps_hw_ts_idx) + return -EINVAL; + + if (((cpts->hw_ts_enable & BIT(index)) >> index) == on) return 0; mutex_lock(&cpts->ptp_clk_lock); @@ -591,6 +608,12 @@ static void am65_cpts_perout_enable_hw(struct am65_cpts *cpts, static int am65_cpts_perout_enable(struct am65_cpts *cpts, struct ptp_perout_request *req, int on) { + if (req->index >= cpts->ptp_info.n_per_out) + return -ENXIO; + + if (cpts->pps_present && req->index == cpts->pps_genf_idx) + return -EINVAL; + if (!!(cpts->genf_enable & BIT(req->index)) == !!on) return 0; @@ -604,6 +627,48 @@ static int am65_cpts_perout_enable(struct am65_cpts *cpts, return 0; } +static int am65_cpts_pps_enable(struct am65_cpts *cpts, int on) +{ + int ret = 0; + struct timespec64 ts; + struct ptp_clock_request rq; + u64 ns; + + if (!cpts->pps_present) + return -EINVAL; + + if (cpts->pps_enabled == !!on) + return 0; + + mutex_lock(&cpts->ptp_clk_lock); + + if (on) { + am65_cpts_extts_enable_hw(cpts, cpts->pps_hw_ts_idx, on); + + ns = am65_cpts_gettime(cpts, NULL); + ts = ns_to_timespec64(ns); + rq.perout.period.sec = 1; + rq.perout.period.nsec = 0; + rq.perout.start.sec = ts.tv_sec + 2; + rq.perout.start.nsec = 0; + rq.perout.index = cpts->pps_genf_idx; + + am65_cpts_perout_enable_hw(cpts, &rq.perout, on); + cpts->pps_enabled = true; + } else { + rq.perout.index = cpts->pps_genf_idx; + am65_cpts_perout_enable_hw(cpts, &rq.perout, on); + am65_cpts_extts_enable_hw(cpts, cpts->pps_hw_ts_idx, on); + cpts->pps_enabled = false; + } + + mutex_unlock(&cpts->ptp_clk_lock); + + dev_dbg(cpts->dev, "%s: pps: %s\n", + __func__, on ? "enabled" : "disabled"); + return ret; +} + static int am65_cpts_ptp_enable(struct ptp_clock_info *ptp, struct ptp_clock_request *rq, int on) { @@ -614,6 +679,8 @@ static int am65_cpts_ptp_enable(struct ptp_clock_info *ptp, return am65_cpts_extts_enable(cpts, rq->extts.index, on); case PTP_CLK_REQ_PEROUT: return am65_cpts_perout_enable(cpts, &rq->perout, on); + case PTP_CLK_REQ_PPS: + return am65_cpts_pps_enable(cpts, on); default: break; } @@ -926,6 +993,12 @@ static int am65_cpts_of_parse(struct am65_cpts *cpts, struct device_node *node) if (!of_property_read_u32(node, "ti,cpts-periodic-outputs", &prop[0])) cpts->genf_num = prop[0]; + if (!of_property_read_u32_array(node, "ti,pps", prop, 2)) { + cpts->pps_present = true; + cpts->pps_hw_ts_idx = prop[0]; + cpts->pps_genf_idx = prop[1]; + } + return cpts_of_mux_clk_setup(cpts, node); } @@ -993,6 +1066,8 @@ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs, cpts->ptp_info.n_ext_ts = cpts->ext_ts_inputs; if (cpts->genf_num) cpts->ptp_info.n_per_out = cpts->genf_num; + if (cpts->pps_present) + cpts->ptp_info.pps = 1; am65_cpts_set_add_val(cpts); @@ -1028,9 +1103,9 @@ struct am65_cpts *am65_cpts_create(struct device *dev, void __iomem *regs, return ERR_PTR(ret); } - dev_info(dev, "CPTS ver 0x%08x, freq:%u, add_val:%u\n", + dev_info(dev, "CPTS ver 0x%08x, freq:%u, add_val:%u pps:%d\n", am65_cpts_read32(cpts, idver), - cpts->refclk_freq, cpts->ts_add_val); + cpts->refclk_freq, cpts->ts_add_val, cpts->pps_present); return cpts;