From patchwork Fri Jan 6 20:07:34 2017
X-Patchwork-Submitter: Grygorii Strashko
X-Patchwork-Id: 9501691
From: Grygorii Strashko <grygorii.strashko@ti.com>
To: "David S. Miller", Mugunthan V N, Ivan Khoronzhuk
CC: Sekhar Nori, Grygorii Strashko
Subject: [PATCH v2 6/7] net: ethernet: ti: cpsw: add support for ringparam configuration
Date: Fri, 6 Jan 2017 14:07:34 -0600
Message-ID: <20170106200735.4210-7-grygorii.strashko@ti.com>
X-Mailer: git-send-email 2.10.1.dirty
In-Reply-To: <20170106200735.4210-1-grygorii.strashko@ti.com>
References: <20170106200735.4210-1-grygorii.strashko@ti.com>
X-Mailing-List: linux-omap@vger.kernel.org

The CPDMA uses a single pool of descriptors for both RX and TX, which by default is split between all channels proportionally, depending on the total number of CPDMA channels and the number of TX and RX channels. As a result, more descriptors are consumed by the TX path if there are more TX channels, and there is currently no way to dedicate more descriptors to the RX path.

So, add the ability to re-split the CPDMA pool of descriptors between the RX and TX paths via the ethtool '-G' command. This allows configuring and fixing the number of descriptors used by the RX and TX paths, which are then split between the RX/TX channels proportionally, depending on the number of RX/TX channels and their weight.

The ethtool '-G' command accepts only the number of RX entries; the rest of the descriptors are assigned to the TX path automatically.

Command:
 ethtool -G <devname> rx <number of descriptors>

Defaults and limitations:
- the minimum number of RX descriptors is 10% of the total number of descriptors in the CPDMA pool
- the maximum number of RX descriptors is 90% of the total number of descriptors in the CPDMA pool
- by default, descriptors are split equally between the RX and TX paths
- any value passed in the "tx" parameter is ignored

Usage:

 # ethtool -g eth0
 Ring parameters for eth0:
 Pre-set maximums:
 RX:            7372
 RX Mini:       0
 RX Jumbo:      0
 TX:            0
 Current hardware settings:
 RX:            4096
 RX Mini:       0
 RX Jumbo:      0
 TX:            4096

 # ethtool -G eth0 rx 7372

 # ethtool -g eth0
 Ring parameters for eth0:
 Pre-set maximums:
 RX:            7372
 RX Mini:       0
 RX Jumbo:      0
 TX:            0
 Current hardware settings:
 RX:            7372
 RX Mini:       0
 RX Jumbo:      0
 TX:            820
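[Editor's note: the figures above follow from the 10%/90% rules with a pool of 8192 descriptors, a value implied by the reported maximum since (8192 * 9) / 10 = 7372. A minimal standalone sketch of that bounds arithmetic, mirroring the expressions used in the driver; the pool size here is an assumption derived from the output above, not something this patch hardcodes.]

#include <stdio.h>

/* stand-in for the cpsw driver's descs_pool_size module parameter;
 * 8192 is inferred from the ethtool output above
 */
static int descs_pool_size = 8192;

int main(void)
{
	int rx_min  = descs_pool_size / 10;       /* 10% floor    ->  819 */
	int rx_max  = (descs_pool_size * 9) / 10; /* 90% ceiling  -> 7372 */
	int rx_def  = descs_pool_size / 2;        /* equal split  -> 4096 */
	int tx_left = descs_pool_size - rx_max;   /* TX remainder ->  820 */

	printf("rx_min=%d rx_max=%d rx_default=%d tx_left=%d\n",
	       rx_min, rx_max, rx_def, tx_left);
	return 0;
}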
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
---
 drivers/net/ethernet/ti/cpsw.c          | 86 +++++++++++++++++++++++++++++++++
 drivers/net/ethernet/ti/davinci_cpdma.c | 40 ++++++++++++---
 drivers/net/ethernet/ti/davinci_cpdma.h |  4 ++
 3 files changed, 122 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index d39875e..f339268 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -2484,6 +2484,90 @@ static int cpsw_nway_reset(struct net_device *ndev)
 	return -EOPNOTSUPP;
 }
 
+static void cpsw_get_ringparam(struct net_device *ndev,
+			       struct ethtool_ringparam *ering)
+{
+	struct cpsw_priv *priv = netdev_priv(ndev);
+	struct cpsw_common *cpsw = priv->cpsw;
+
+	/* not supported */
+	ering->tx_max_pending = 0;
+	ering->tx_pending = cpdma_get_num_tx_descs(cpsw->dma);
+	/* Max 90% RX buffers */
+	ering->rx_max_pending = (descs_pool_size * 9) / 10;
+	ering->rx_pending = cpdma_get_num_rx_descs(cpsw->dma);
+}
+
+static int cpsw_set_ringparam(struct net_device *ndev,
+			      struct ethtool_ringparam *ering)
+{
+	struct cpsw_priv *priv = netdev_priv(ndev);
+	struct cpsw_common *cpsw = priv->cpsw;
+	struct cpsw_slave *slave;
+	int i, ret;
+
+	/* ignore ering->tx_pending - only rx_pending adjustment is supported */
+
+	if (ering->rx_mini_pending || ering->rx_jumbo_pending ||
+	    ering->rx_pending < (descs_pool_size / 10) ||
+	    ering->rx_pending > ((descs_pool_size * 9) / 10))
+		return -EINVAL;
+
+	if (ering->rx_pending == cpdma_get_num_rx_descs(cpsw->dma))
+		return 0;
+
+	/* Disable NAPI scheduling */
+	cpsw_intr_disable(cpsw);
+
+	/* Stop all transmit queues for every network device.
+	 * Disable re-using rx descriptors with dormant_on.
+	 */
+	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
+		if (!(slave->ndev && netif_running(slave->ndev)))
+			continue;
+
+		netif_tx_stop_all_queues(slave->ndev);
+		netif_dormant_on(slave->ndev);
+	}
+
+	/* Handle rest of tx packets and stop cpdma channels */
+	cpdma_ctlr_stop(cpsw->dma);
+
+	cpdma_set_num_rx_descs(cpsw->dma, ering->rx_pending);
+
+	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
+		if (!(slave->ndev && netif_running(slave->ndev)))
+			continue;
+
+		/* Enable rx packets handling */
+		netif_dormant_off(slave->ndev);
+	}
+
+	if (cpsw_common_res_usage_state(cpsw)) {
+		cpdma_chan_split_pool(cpsw->dma);
+
+		ret = cpsw_fill_rx_channels(priv);
+		if (ret)
+			goto err;
+
+		/* After this receive is started */
+		cpdma_ctlr_start(cpsw->dma);
+		cpsw_intr_enable(cpsw);
+	}
+
+	/* Resume transmit for every affected interface */
+	for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
+		if (!(slave->ndev && netif_running(slave->ndev)))
+			continue;
+		netif_tx_start_all_queues(slave->ndev);
+	}
+	return 0;
+err:
+	dev_err(priv->dev, "cannot set ring params, closing device\n");
+	dev_close(ndev);
+	return ret;
+}
+
 static const struct ethtool_ops cpsw_ethtool_ops = {
 	.get_drvinfo	= cpsw_get_drvinfo,
 	.get_msglevel	= cpsw_get_msglevel,
@@ -2510,6 +2594,8 @@ static const struct ethtool_ops cpsw_ethtool_ops = {
 	.get_eee	= cpsw_get_eee,
 	.set_eee	= cpsw_set_eee,
 	.nway_reset	= cpsw_nway_reset,
+	.get_ringparam	= cpsw_get_ringparam,
+	.set_ringparam	= cpsw_set_ringparam,
 };
 
 static void cpsw_slave_init(struct cpsw_slave *slave, struct cpsw_common *cpsw,
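[Editor's note: cpsw_get_ringparam()/cpsw_set_ringparam() above are invoked by the ethtool core in response to the SIOCETHTOOL ioctl. A minimal userspace sketch, not part of the patch, of the query side that `ethtool -g` performs; the interface name "eth0" is an assumption.]

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
	/* ETHTOOL_GRINGPARAM fills the same struct the driver populates */
	struct ethtool_ringparam ering = { .cmd = ETHTOOL_GRINGPARAM };
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return 1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);
	ifr.ifr_data = (void *)&ering;

	if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
		printf("rx %u (max %u), tx %u\n",
		       ering.rx_pending, ering.rx_max_pending,
		       ering.tx_pending);
	close(fd);
	return 0;
}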
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 65e2f12..d80bff1 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -108,6 +108,8 @@ struct cpdma_ctlr {
 	spinlock_t lock;
 	struct cpdma_chan *channels[2 * CPDMA_MAX_CHANNELS];
 	int chan_num;
+	int num_rx_desc; /* RX descriptors number */
+	int num_tx_desc; /* TX descriptors number */
 };
 
 struct cpdma_chan {
@@ -518,6 +520,9 @@ struct cpdma_ctlr *cpdma_ctlr_create(struct cpdma_params *params)
 	if (cpdma_desc_pool_create(ctlr))
 		return NULL;
+	/* split pool equally between RX/TX by default */
+	ctlr->num_tx_desc = ctlr->pool->num_desc / 2;
+	ctlr->num_rx_desc = ctlr->pool->num_desc - ctlr->num_tx_desc;
 
 	if (WARN_ON(ctlr->num_chan > CPDMA_MAX_CHANNELS))
 		ctlr->num_chan = CPDMA_MAX_CHANNELS;
@@ -717,22 +722,22 @@ static void cpdma_chan_set_descs(struct cpdma_ctlr *ctlr,
 		}
 	}
 	/* use remains */
-	most_chan->desc_num += desc_cnt;
+	if (most_chan)
+		most_chan->desc_num += desc_cnt;
 }
 
 /**
  * cpdma_chan_split_pool - Splits ctrl pool between all channels.
  * Has to be called under ctlr lock
  */
-static int cpdma_chan_split_pool(struct cpdma_ctlr *ctlr)
+int cpdma_chan_split_pool(struct cpdma_ctlr *ctlr)
 {
 	int tx_per_ch_desc = 0, rx_per_ch_desc = 0;
-	struct cpdma_desc_pool *pool = ctlr->pool;
 	int free_rx_num = 0, free_tx_num = 0;
 	int rx_weight = 0, tx_weight = 0;
 	int tx_desc_num, rx_desc_num;
 	struct cpdma_chan *chan;
-	int i, tx_num = 0;
+	int i;
 
 	if (!ctlr->chan_num)
 		return 0;
@@ -750,15 +755,14 @@ static int cpdma_chan_split_pool(struct cpdma_ctlr *ctlr)
 			if (!chan->weight)
 				free_tx_num++;
 			tx_weight += chan->weight;
-			tx_num++;
 		}
 	}
 
 	if (rx_weight > 100 || tx_weight > 100)
 		return -EINVAL;
 
-	tx_desc_num = (tx_num * pool->num_desc) / ctlr->chan_num;
-	rx_desc_num = pool->num_desc - tx_desc_num;
+	tx_desc_num = ctlr->num_tx_desc;
+	rx_desc_num = ctlr->num_rx_desc;
 
 	if (free_tx_num) {
 		tx_per_ch_desc = tx_desc_num - (tx_weight * tx_desc_num) / 100;
@@ -774,6 +778,8 @@ static int cpdma_chan_split_pool(struct cpdma_ctlr *ctlr)
 
 	return 0;
 }
+EXPORT_SYMBOL_GPL(cpdma_chan_split_pool);
+
 
 /* cpdma_chan_set_weight - set weight of a channel in percentage.
  * Tx and Rx channels have separate weights. That is 100% for RX
@@ -907,7 +913,6 @@ struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr *ctlr, int chan_num,
 	chan->chan_num = chan_num;
 	chan->handler = handler;
 	chan->rate = 0;
-	chan->desc_num = ctlr->pool->num_desc / 2;
 	chan->weight = 0;
 
 	if (is_rx_chan(chan)) {
@@ -1329,4 +1334,23 @@ int cpdma_control_set(struct cpdma_ctlr *ctlr, int control, int value)
 }
 EXPORT_SYMBOL_GPL(cpdma_control_set);
 
+int cpdma_get_num_rx_descs(struct cpdma_ctlr *ctlr)
+{
+	return ctlr->num_rx_desc;
+}
+EXPORT_SYMBOL_GPL(cpdma_get_num_rx_descs);
+
+int cpdma_get_num_tx_descs(struct cpdma_ctlr *ctlr)
+{
+	return ctlr->num_tx_desc;
+}
+EXPORT_SYMBOL_GPL(cpdma_get_num_tx_descs);
+
+void cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc)
+{
+	ctlr->num_rx_desc = num_rx_desc;
+	ctlr->num_tx_desc = ctlr->pool->num_desc - ctlr->num_rx_desc;
+}
+EXPORT_SYMBOL_GPL(cpdma_set_num_rx_descs);
+
 MODULE_LICENSE("GPL");
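[Editor's note: the accessors added above keep a simple invariant: the RX and TX budgets always sum to the pool size, so growing one side shrinks the other. A standalone toy model of that bookkeeping; the pool size and the requested RX count are illustrative values taken from the usage example.]

#include <assert.h>
#include <stdio.h>

/* toy model of the budget fields added to struct cpdma_ctlr */
struct pool_budget {
	int num_desc;		/* total pool size */
	int num_rx_desc;	/* RX budget */
	int num_tx_desc;	/* TX budget */
};

/* mirrors cpdma_set_num_rx_descs(): TX always gets the remainder */
static void set_num_rx_descs(struct pool_budget *b, int num_rx_desc)
{
	b->num_rx_desc = num_rx_desc;
	b->num_tx_desc = b->num_desc - b->num_rx_desc;
}

int main(void)
{
	struct pool_budget b = { .num_desc = 8192 };

	/* default equal split, as done in cpdma_ctlr_create() */
	set_num_rx_descs(&b, b.num_desc / 2);
	assert(b.num_rx_desc + b.num_tx_desc == b.num_desc);

	/* effect of "ethtool -G eth0 rx 7372" from the usage example */
	set_num_rx_descs(&b, 7372);
	printf("rx=%d tx=%d\n", b.num_rx_desc, b.num_tx_desc); /* rx=7372 tx=820 */
	return 0;
}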
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h b/drivers/net/ethernet/ti/davinci_cpdma.h
index cb45f8f..fd65ce2 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.h
+++ b/drivers/net/ethernet/ti/davinci_cpdma.h
@@ -114,5 +114,9 @@ enum cpdma_control {
 int cpdma_control_get(struct cpdma_ctlr *ctlr, int control);
 int cpdma_control_set(struct cpdma_ctlr *ctlr, int control, int value);
 
+int cpdma_get_num_rx_descs(struct cpdma_ctlr *ctlr);
+void cpdma_set_num_rx_descs(struct cpdma_ctlr *ctlr, int num_rx_desc);
+int cpdma_get_num_tx_descs(struct cpdma_ctlr *ctlr);
+int cpdma_chan_split_pool(struct cpdma_ctlr *ctlr);
+
 #endif
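[Editor's note: a small worked example of the weighted-split expression visible in the cpdma_chan_split_pool() hunk, tx_per_ch_desc = tx_desc_num - (tx_weight * tx_desc_num) / 100: weighted channels claim their percentage of the fixed budget first, and the remainder is shared by the zero-weight channels. The channel counts and the weight value below are illustrative assumptions, and the final per-channel division is inferred from the surrounding code, which is not fully shown in this hunk.]

#include <stdio.h>

int main(void)
{
	int tx_desc_num = 820;	/* TX budget after "ethtool -G eth0 rx 7372" */
	int tx_weight = 25;	/* one TX channel claims 25% ... */
	int free_tx_num = 3;	/* ... three more have no explicit weight */

	/* same expression as in the patch: budget left after weighted shares */
	int tx_per_ch_desc = tx_desc_num - (tx_weight * tx_desc_num) / 100;

	printf("descs left for unweighted TX channels: %d (%d each)\n",
	       tx_per_ch_desc, tx_per_ch_desc / free_tx_num); /* 615 (205 each) */
	return 0;
}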